From: Kirill K. <kir...@sm...> - 2015-05-11 21:45:32
> -----Original Message-----
> From: Jan Trmal [mailto:jt...@gm...]
> Sent: 2015-05-11 0201
>
> Kirill, there is no reason to wait. We should start working on it.
> I propose you could first send the Kaldi code patches -- I actually
> worked on this and I'm able to compile Kaldi, so this should be quite
> straightforward, provided you will generate the diff against the HEAD.

Sorry, I did not address this part in the previous message. SGTM; I'll replay my repo changes on top of a GitHub fork, and then we can start from there. I'm sure to hit a couple of snags doing that, so give me a few days.

Did you look at https://github.com/workflow-demo-org/workflow-demo/wiki/GitHub-pull-request-based-workflow? Too basic, too complex, any help from this at all?

-kkm
From: Daniel P. <dp...@gm...> - 2015-05-11 21:44:35
OK, fine. I'm pretty sure this is a bug in MSVC: the namespace lookup for a templated function happens in the context in which the template is instantiated (as opposed to where it was defined). But it makes sense to patch it anyway.

Dan

On Mon, May 11, 2015 at 2:09 PM, Jan Trmal <jt...@gm...> wrote:
> [...]
From: Jan T. <jt...@gm...> - 2015-05-11 21:09:49
Ok, I will have a look tomorrow. I think I've resolved the types issue (the last one you mentioned). I generated the patch for OpenFst that defines the int types within the fst namespace -- originally they exist at the highest scope (i.e. ::int32). The patch has something like

namespace fst {
typedef ::int32 int32;
}

Doing this, the lookup works fine even under the MS C++ compiler. To me, this is the optimal solution, as we have to patch OpenFst anyway (and my change won't break any code). You can have a look at the openfstwin-1.3.3.patch in tools/extras. I am open to any suggestion or general discussion on how this should be made better.

Y.

On May 11, 2015 9:03 PM, "Daniel Povey" <dp...@gm...> wrote:
> [...]
From: Daniel P. <dp...@gm...> - 2015-05-11 19:03:44
> Well, the build was not "painless" at all. In fact, the perl script
> generated a solution with 600+ projects that simply failed to build.
> VS was churning out dependencies on these, and crashed after about 10
> minutes of just sitting there with 100% CPU use. Not a single file was
> compiled, so I think it was just autogenerating dependencies at this
> point.

Build systems on Windows are pretty broken. I think inside Microsoft they use some other system, not the one used in VS, which is not publicly released.

> I did not try to build from the command line, though. Am I
> understanding correctly that it would compile?

No, not everything would compile. Sometimes the VS compiler just crashes.

> What I did not like about the existing system is that I could not plug
> in CUDA or the Intel compiler. It is extremely rigid in the options
> part, and to change compiler options you most likely have to change
> the perl code that generates the project files, or modify the various
> MSBuild include files (called "property sheets" in this context) that
> interplay quite non-trivially. No, I have spent a few days on the
> build system not for a love of tinkering with MSBuild at all! :)

What we committed was just a hack to try to get something working. IIRC we just looked at the build files generated by VS to try to guess what the format of those files was supposed to be; I'm not sure that they are really intended to be human-readable/writable (unless that human really loves XML).

> Anyway, you can look at it at
> https://github.com/kkm000/kaldi/tree/winbuild/msbuild. (NB I have not
> updated the branch in a while.) If you think this is not an acceptable
> solution, I'm just dropping the ball, no offence taken, naturally. I
> can maintain this as an "unofficial port" for a while. In any case,
> the build is the least of the problems, as Dan has pointed out
> already...
>
> The biggest (in the number of changed lines) changes I had to make
> were in the namespace using-declarations, because of conflicts between
> the types int32 and friends from different namespaces. I ended up
> reading the C++ standard to understand whether gcc, icl or msvc is
> "correct" in the different cases they complain about, and the standard
> appears not to define any behavior there. These changes I would prefer
> accepted into the mainline of development, because they are harder to
> maintain in a fork.

In any case where you have to do something like typedef kaldi::int32 int32, we should definitely get this committed -- talk to Yenda. Actually, this is really a bug in OpenFst which they haven't fixed yet. There is a "proper" POSIX way to get types like int32, which we use in Kaldi, but in OpenFst they do it in a different, ad-hoc way which sometimes generates different types (e.g. on a system where int and long int are both 32-bit). So the Kaldi int32 and the OpenFst int32 end up being different, incompatible types.

Dan
From: Kirill K. <kir...@sm...> - 2015-05-11 18:36:51
Well, the build was not "painless" at all. In fact, the perl script generated a solution with 600+ projects that simply failed to build. VS was churning out dependencies on these, and crashed after about 10 minutes of just sitting there with 100% CPU use. Not a single file was compiled, so I think it was just autogenerating dependencies at this point. I did not try to build from the command line, though. Am I understanding correctly that it would compile?

What I did not like about the existing system is that I could not plug in CUDA or the Intel compiler. It is extremely rigid in the options part, and to change compiler options you most likely have to change the perl code that generates the project files, or modify the various MSBuild include files (called "property sheets" in this context) that interplay quite non-trivially. No, I have spent a few days on the build system not for a love of tinkering with MSBuild at all! :)

Anyway, you can look at it at https://github.com/kkm000/kaldi/tree/winbuild/msbuild. (NB I have not updated the branch in a while.) If you think this is not an acceptable solution, I'm just dropping the ball, no offence taken, naturally. I can maintain this as an "unofficial port" for a while. In any case, the build is the least of the problems, as Dan has pointed out already...

The biggest (in the number of changed lines) changes I had to make were in the namespace using-declarations, because of conflicts between the types int32 and friends from different namespaces. I ended up reading the C++ standard to understand whether gcc, icl or msvc is "correct" in the different cases they complain about, and the standard appears not to define any behavior there. These changes I would prefer accepted into the mainline of development, because they are harder to maintain in a fork.

-kkm

> -----Original Message-----
> From: Jan Trmal [mailto:jt...@gm...]
> Sent: 2015-05-11 0201
> To: Dan Povey
> Cc: Kirill Katsnelson; kal...@li...; kaldi-us...@li...; 洪孝宗
> Subject: Re: [Kaldi-users] Kaldi compiles now under VS 2013
>
> [...]
From: Jan T. <jt...@gm...> - 2015-05-11 09:01:20
Guys, the migration to git is "unofficially" done -- unless we find some problems with it, the repo (https://github.com/kaldi-asr/kaldi.git) will stay as it is now.

Kirill, there is no reason to wait. We should start working on it. I propose you could first send the Kaldi code patches -- I actually worked on this and I'm able to compile Kaldi, so this should be quite straightforward, provided you will generate the diff against the HEAD.

Then we can start looking at the "build system". I understand your reasons and I think they might be legit. My feeling is, however, that a lot of people are not using MSBuild and will expect the MSVC solution, just because this is how they usually work. I haven't tested it, but you should be able to use the current SLN to build the binaries from the command line -- that should be completely painless -- but again, I haven't tested it.

y.

On Thu, May 7, 2015 at 10:53 PM, Daniel Povey <dp...@gm...> wrote:
> [...]
From: Daniel P. <dp...@gm...> - 2015-05-07 20:53:22
|
>> To the extent that your patches are simple fixes and minor changes to >> the code, I think we could apply them. Perhaps you could work with >> Yenda to get your changes checked? > > Easy enough, and as easy to hold till the migration is done. Was there a change in the plans? Not really a change in the plans, but we don't have a definite timeline for migrating to git, and it could be that we'll keep the svn as upstream for a while. Your changes seem reasonable, but I think it would be better to get things started now rather than later. The git/svn issues should not be that hard. Just start sending patches to Yenda and we'll handle your changes bit by bit. Dan > >> If you update your git repo to >> point to the current code, then a patch should be applicable by the >> program "patch" to the svn repo too. > > There are many changes, and I certainly want to split the patch so it is readable. One megapatch is not reviewable. > >> Regarding your build approach, you could send me and Yenda some details >> about how you did it, and we could consider whether to support that. > > There's a Powershell script that takes information from every src/*/Makefile into a simple declarative MSBuild script $dirname.kwproj, and a top-level MSBuild "makefile" that drives the whole thing. A little bit more complex than that to allow for user-defined options, but generally this is it. This supports building libraries, tools, tests and running the tests with command line switches. The whole thing pretty much mimics a linux make process but using MS tools. > >> I don't think it makes sense to try to maintain a parallel "windows- >> compatible" version of the scripts, if there are larger changes >> required there. > > I was targeting for no changes at all. There maybe a few very small patches that supposed not to break compatibility. 
> >> Anyway you depend on cygwin for scripting, and the set >> of people who want to run cygwin and build with MSVC is probably >> limited enough that I don't think it makes sense for us to try to >> support it. > > This is not as simple a choice as it seems at first. Alex Hung just posted a trick to build CUDA under Cygwin; before he did I thought it was not possible. MKL is another story. I would rather build under native Windows than try to pull this monster into the Cygwin build environment. > > -kkm > |
From: Kirill K. <kir...@sm...> - 2015-05-07 20:02:48
|
> -----Original Message----- > From: Tony Robinson [mailto:to...@sp...] > Sent: 2015-05-06 0001 > > "its not a bug, it's feature" of CUDA 6 called unified memory - > http://devblogs.nvidia.com/parallelforall/unified-memory-in-cuda-6/ > (the idea is automatic transfer from CPU to GPU memory - personally I > don't like it). Would you mind sharing why not? I read the link, and UM seems like a good feature to me -- if used carefully. I got the impression that the synchronization of accesses from GPU and CPU will be a funky thing to implement correctly. Is this what you are referring to, or am I missing other points? -kkm |
From: Kirill K. <kir...@sm...> - 2015-05-07 19:25:49
|
> -----Original Message----- > From: Daniel Povey [mailto:dp...@gm...] > Sent: 2015-05-07 1109 > Subject: Re: [Kaldi-users] Kaldi compiles now under VS 2013 > > To the extent that your patches are simple fixes and minor changes to > the code, I think we could apply them. Perhaps you could work with > Yenda to get your changes checked? Easy enough, and as easy to hold till the migration is done. Was there a change in the plans? > If you update your git repo to > point to the current code, then a patch should be applicable by the > program "patch" to the svn repo too. There are many changes, and I certainly want to split the patch so it is readable. One megapatch is not reviewable. > Regarding your build approach, you could send me and Yenda some details > about how you did it, and we could consider whether to support that. There's a Powershell script that turns the information from every src/*/Makefile into a simple declarative MSBuild script $dirname.kwproj, and a top-level MSBuild "makefile" that drives the whole thing. A little bit more complex than that to allow for user-defined options, but generally this is it. This supports building libraries, tools and tests, and running the tests with command line switches. The whole thing pretty much mimics a Linux make process, but using MS tools. > I don't think it makes sense to try to maintain a parallel "windows- > compatible" version of the scripts, if there are larger changes > required there. I was aiming for no changes at all. There may be a few very small patches that are not supposed to break compatibility. > Anyway you depend on cygwin for scripting, and the set > of people who want to run cygwin and build with MSVC is probably > limited enough that I don't think it makes sense for us to try to > support it. This is not as simple a choice as it seems at first. Alex Hung just posted a trick to build CUDA under Cygwin; before he did, I thought it was not possible. MKL is another story. 
I would rather build under native Windows than try to pull this monster into the Cygwin build environment. -kkm |
From: Daniel P. <dp...@gm...> - 2015-05-07 18:09:21
|
To the extent that your patches are simple fixes and minor changes to the code, I think we could apply them. Perhaps you could work with Yenda to get your changes checked? If you update your git repo to point to the current code, then a patch should be applicable by the program "patch" to the svn repo too. Regarding your build approach, you could send me and Yenda some details about how you did it, and we could consider whether to support that. I don't think it makes sense to try to maintain a parallel "windows-compatible" version of the scripts, if there are larger changes required there. Anyway you depend on cygwin for scripting, and the set of people who want to run cygwin and build with MSVC is probably limited enough that I don't think it makes sense for us to try to support it. Dan On Thu, May 7, 2015 at 10:52 AM, Kirill Katsnelson <kir...@sm...> wrote: > I actually have kaldi more or less fully working under Windows. It compiles with either VS 2013 or ICL, MKL and CUDA. I completely redid the build system, so it does not depend on VS being used and loading 600+ projects. > > I did not get a noticeable performance gain from ICL vs MSC, and ICL in optimizing mode compiles the code up to 5 times longer than the MSC. I think the reason is that kaldi uses a matrix library for all sizable work. > > The code needs certain patches and Cygwin runtime configuration tricks to get it working. The main problems are 1) pathnames, as shell scripts pass unix-style paths to kaldi programs and 2) binary files on stdin/out, which get promptly corrupted by LF -> CR LF expansion on output unless configured. Most kaldi *bin/ programs already apply the setting, but OpenFST tools do not. > > However, I ran both librispeech and tedlium sets through to completion. It was a painful but a rewarding exercise in understanding the structure and guts of the toolkit. > > I tried both Paul's 1.3.x (do not remember the value of x off the top of my head) and 1.4.1 ports of OpenFST. 
The part required by Kaldi works in both. I stayed conservative with 1.3 in my tests though. Both versions required a couple of changes missed by the original patcher. > > I have all changes in github, and I was holding my patches until the repo is migrated, so that they are easy to peer-review. If there is an interest in running it before it's integrated, let me know. I need to both update the branch to the latest code and write a configuration guide. > > -kkm > >> -----Original Message----- >> From: Jan Trmal [mailto:jt...@gm...] >> Sent: 2015-05-06 1415 >> To: kal...@li...; kaldi- >> us...@li... >> Subject: [Kaldi-users] Kaldi compiles now under VS 2013 >> >> All, just to have it here in case someone actually search the list >> before asking: >> I was able to compile Kaldi under VS 2013 and OpenFST(Win)1.3.4 (which >> I forked from Paul Dixon's 1.3.3, did some more fixes and upgraded to >> 1.3.4 -- but the kudos should go to Paul). The changes were committed >> to the repository, so it should be available to everyone. >> I 'm planning to figure out if it's possible to add support for VS2015 >> and potentially OpenFST 1.4.x OpenFST 1.4.x needs C++11, which, to my >> knowledge, is partial at best in VS2015, so we will see... >> >> BTW: I'm not sure if anyone will be able to maintain/fix the codes to >> be "compilable" under MS VS. >> >> In case of troubles (and if you really really need Kaldi in VS), use >> the Intel C++ compiler. It seems that it adheres to the standard >> C++/C++11 >> -- (which is not the case for VS) >> >> y. > > ------------------------------------------------------------------------------ > One dashboard for servers and applications across Physical-Virtual-Cloud > Widest out-of-the-box monitoring support with 50+ applications > Performance metrics, stats and reports that give you Actionable Insights > Deep dive visibility with transaction tracing using APM Insight. 
> http://ad.doubleclick.net/ddm/clk/290420510;117567292;y > _______________________________________________ > Kaldi-users mailing list > Kal...@li... > https://lists.sourceforge.net/lists/listinfo/kaldi-users |
From: Kirill K. <kir...@sm...> - 2015-05-07 17:53:10
|
I actually have kaldi more or less fully working under Windows. It compiles with either VS 2013 or ICL, with MKL and CUDA. I completely redid the build system, so it does not depend on VS being used and loading 600+ projects. I did not get a noticeable performance gain from ICL vs MSC, and ICL in optimizing mode takes up to 5 times longer than MSC to compile the code. I think the reason is that kaldi uses a matrix library for all sizable work. The code needs certain patches and Cygwin runtime configuration tricks to get it working. The main problems are 1) pathnames, as shell scripts pass unix-style paths to kaldi programs, and 2) binary files on stdin/out, which get promptly corrupted by LF -> CR LF expansion on output unless configured. Most kaldi *bin/ programs already apply the setting, but the OpenFST tools do not. However, I ran both the librispeech and tedlium sets through to completion. It was a painful but rewarding exercise in understanding the structure and guts of the toolkit. I tried both Paul's 1.3.x (do not remember the value of x off the top of my head) and 1.4.1 ports of OpenFST. The part required by Kaldi works in both. I stayed conservative with 1.3 in my tests though. Both versions required a couple of changes missed by the original patcher. I have all changes on github, and I was holding my patches until the repo is migrated, so that they are easy to peer-review. If there is an interest in running it before it's integrated, let me know. I need to both update the branch to the latest code and write a configuration guide. -kkm > -----Original Message----- > From: Jan Trmal [mailto:jt...@gm...] > Sent: 2015-05-06 1415 > To: kal...@li...; kaldi- > us...@li... 
> Subject: [Kaldi-users] Kaldi compiles now under VS 2013 > > All, just to have it here in case someone actually search the list > before asking: > I was able to compile Kaldi under VS 2013 and OpenFST(Win)1.3.4 (which > I forked from Paul Dixon's 1.3.3, did some more fixes and upgraded to > 1.3.4 -- but the kudos should go to Paul). The changes were committed > to the repository, so it should be available to everyone. > I 'm planning to figure out if it's possible to add support for VS2015 > and potentially OpenFST 1.4.x OpenFST 1.4.x needs C++11, which, to my > knowledge, is partial at best in VS2015, so we will see... > > BTW: I'm not sure if anyone will be able to maintain/fix the codes to > be "compilable" under MS VS. > > In case of troubles (and if you really really need Kaldi in VS), use > the Intel C++ compiler. It seems that it adheres to the standard > C++/C++11 > -- (which is not the case for VS) > > y. |
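[Editor's note] The stdio configuration Kirill mentions above ("the setting" most kaldi *bin/ programs already apply) boils down to switching the standard streams out of text mode on Windows, so that LF bytes in binary archives are not expanded to CR LF. A minimal sketch, with a helper name of my own choosing (not Kaldi's actual function):

```cpp
#include <cstdio>
#ifdef _WIN32
#include <fcntl.h>
#include <io.h>
#endif

// Put stdin/stdout into binary mode so archive bytes piped between
// programs are not corrupted by text-mode LF -> CR LF expansion.
// On POSIX builds this is a no-op: streams are always binary there.
inline void SetStdioToBinaryMode() {
#ifdef _WIN32
  _setmode(_fileno(stdin), _O_BINARY);
  _setmode(_fileno(stdout), _O_BINARY);
#endif
}
```

A program would call this once at startup, before reading or writing any binary data on the standard streams.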
From: Daniel P. <dp...@gm...> - 2015-05-07 02:15:24
|
Thanks. That does look slightly painful. If you are familiar with Makefiles, you could try to generate a Cygwin version of the Makefile that we use in the cudamatrix directory, and we could have the configure script replace the usual version with that version if compiled on Cygwin. Dan On Wed, May 6, 2015 at 6:04 PM, 洪孝宗 <ale...@st...> wrote: > Sorry, my english is poor. :$ > I take following steps to compile kaldi under cygwin/gcc. > 1) add __declspec(dllexport/import) to cu-{rand}kernels.cu and related .h > files > 2) use nvcc --share to generate .dll and .lib. > 3) .dll => .a > reimp -d cu-kernels.lib > reimp -d cu-randkernels.lib > dlltool -d cu-kernels.def -D cu-kernels.dll -k -l libcu_kernels.a > dlltool -d cu-randkernels.def -D cu-randkernels.dll -k -l > libcu_randkernels.a > 4) compile Kaldi with gcc > I think it will become very easy to maintain. > > Alex Hung > > 2015年5月7日 上午5:19於 "Daniel Povey" <dp...@gm...>寫道: >> >> Apparently in Windows 10 they are going to support llvm as an option >> for compilation in Visual Studio. So that will fix some things. But >> who knows if they will break other things. >> >> Dan >> >> >> On Wed, May 6, 2015 at 2:15 PM, Jan Trmal <jt...@gm...> wrote: >> > All, just to have it here in case someone actually search the list >> > before >> > asking: >> > I was able to compile Kaldi under VS 2013 and OpenFST(Win)1.3.4 (which I >> > forked from Paul Dixon's 1.3.3, did some more fixes and upgraded to >> > 1.3.4 -- >> > but the kudos should go to Paul). The changes were committed to the >> > repository, so it should be available to everyone. >> > I 'm planning to figure out if it's possible to add support for VS2015 >> > and >> > potentially OpenFST 1.4.x >> > OpenFST 1.4.x needs C++11, which, to my knowledge, is partial at best in >> > VS2015, so we will see... >> > >> > BTW: I'm not sure if anyone will be able to maintain/fix the codes to be >> > "compilable" under MS VS. 
>> > In case of troubles (and if you really really need Kaldi in VS), use the >> > Intel C++ compiler. It seems that it adheres to the standard C++/C++11 >> > -- >> > (which is not the case for VS) >> > >> > y. >> > >> > >> > >> > ------------------------------------------------------------------------------ >> > One dashboard for servers and applications across Physical-Virtual-Cloud >> > Widest out-of-the-box monitoring support with 50+ applications >> > Performance metrics, stats and reports that give you Actionable Insights >> > Deep dive visibility with transaction tracing using APM Insight. >> > http://ad.doubleclick.net/ddm/clk/290420510;117567292;y >> > _______________________________________________ >> > Kaldi-developers mailing list >> > Kal...@li... >> > https://lists.sourceforge.net/lists/listinfo/kaldi-developers >> > >> >> >> ------------------------------------------------------------------------------ >> One dashboard for servers and applications across Physical-Virtual-Cloud >> Widest out-of-the-box monitoring support with 50+ applications >> Performance metrics, stats and reports that give you Actionable Insights >> Deep dive visibility with transaction tracing using APM Insight. >> http://ad.doubleclick.net/ddm/clk/290420510;117567292;y >> _______________________________________________ >> Kaldi-developers mailing list >> Kal...@li... >> https://lists.sourceforge.net/lists/listinfo/kaldi-developers |
From: 洪孝宗 <ale...@st...> - 2015-05-07 01:05:11
|
Sorry, my English is poor. :$ I take the following steps to compile kaldi under cygwin/gcc. 1) add *__declspec(dllexport/import)* to cu-{rand}kernels.cu and related .h files 2) use *nvcc --shared* to generate .dll and .lib. 3) .dll => .a reimp -d cu-kernels.lib reimp -d cu-randkernels.lib dlltool -d cu-kernels.def -D cu-kernels.dll -k -l libcu_kernels.a dlltool -d cu-randkernels.def -D cu-randkernels.dll -k -l libcu_randkernels.a 4) compile Kaldi with gcc I think it will become very easy to maintain. Alex Hung On May 7, 2015 at 5:19 AM, "Daniel Povey" <dp...@gm...> wrote: > Apparently in Windows 10 they are going to support llvm as an option > for compilation in Visual Studio. So that will fix some things. But > who knows if they will break other things. > > Dan > > > On Wed, May 6, 2015 at 2:15 PM, Jan Trmal <jt...@gm...> wrote: > > All, just to have it here in case someone actually search the list before > > asking: > > I was able to compile Kaldi under VS 2013 and OpenFST(Win)1.3.4 (which I > > forked from Paul Dixon's 1.3.3, did some more fixes and upgraded to > 1.3.4 -- > > but the kudos should go to Paul). The changes were committed to the > > repository, so it should be available to everyone. > > I 'm planning to figure out if it's possible to add support for VS2015 > and > > potentially OpenFST 1.4.x > > OpenFST 1.4.x needs C++11, which, to my knowledge, is partial at best in > > VS2015, so we will see... > > > > BTW: I'm not sure if anyone will be able to maintain/fix the codes to be > > "compilable" under MS VS. > > In case of troubles (and if you really really need Kaldi in VS), use the > > Intel C++ compiler. It seems that it adheres to the standard C++/C++11 -- > > (which is not the case for VS) > > > > y. 
> > > > > > > ------------------------------------------------------------------------------ > > One dashboard for servers and applications across Physical-Virtual-Cloud > > Widest out-of-the-box monitoring support with 50+ applications > > Performance metrics, stats and reports that give you Actionable Insights > > Deep dive visibility with transaction tracing using APM Insight. > > http://ad.doubleclick.net/ddm/clk/290420510;117567292;y > > _______________________________________________ > > Kaldi-developers mailing list > > Kal...@li... > > https://lists.sourceforge.net/lists/listinfo/kaldi-developers > > > > > ------------------------------------------------------------------------------ > One dashboard for servers and applications across Physical-Virtual-Cloud > Widest out-of-the-box monitoring support with 50+ applications > Performance metrics, stats and reports that give you Actionable Insights > Deep dive visibility with transaction tracing using APM Insight. > http://ad.doubleclick.net/ddm/clk/290420510;117567292;y > _______________________________________________ > Kaldi-developers mailing list > Kal...@li... > https://lists.sourceforge.net/lists/listinfo/kaldi-developers > |
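[Editor's note] Step 1 of Alex's recipe can be sketched roughly as below. The macro name, the build-time define, and the example function are all hypothetical, chosen only to illustrate the usual dllexport/dllimport split between building the DLL (under nvcc) and consuming it (under Cygwin gcc via the import library):

```cpp
// Export when building the DLL, import when linking against it,
// no-op on non-Windows platforms. BUILDING_CU_KERNELS_DLL would be
// defined only in the nvcc build of the kernel wrapper DLL.
#if defined(_WIN32) && defined(BUILDING_CU_KERNELS_DLL)
#define CU_KERNELS_API __declspec(dllexport)
#elif defined(_WIN32)
#define CU_KERNELS_API __declspec(dllimport)
#else
#define CU_KERNELS_API
#endif

// Example of a wrapper carrying the macro (hypothetical symbol; the
// real cu-kernels wrappers have CUDA-specific signatures).
extern "C" CU_KERNELS_API int cu_kernels_version() { return 1; }
```

Each declaration in the cu-{rand}kernels headers would be prefixed with the macro, so a single header serves both sides of the DLL boundary.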
From: Daniel P. <dp...@gm...> - 2015-05-06 21:18:56
|
Apparently in Windows 10 they are going to support llvm as an option for compilation in Visual Studio. So that will fix some things. But who knows if they will break other things. Dan On Wed, May 6, 2015 at 2:15 PM, Jan Trmal <jt...@gm...> wrote: > All, just to have it here in case someone actually search the list before > asking: > I was able to compile Kaldi under VS 2013 and OpenFST(Win)1.3.4 (which I > forked from Paul Dixon's 1.3.3, did some more fixes and upgraded to 1.3.4 -- > but the kudos should go to Paul). The changes were committed to the > repository, so it should be available to everyone. > I 'm planning to figure out if it's possible to add support for VS2015 and > potentially OpenFST 1.4.x > OpenFST 1.4.x needs C++11, which, to my knowledge, is partial at best in > VS2015, so we will see... > > BTW: I'm not sure if anyone will be able to maintain/fix the codes to be > "compilable" under MS VS. > In case of troubles (and if you really really need Kaldi in VS), use the > Intel C++ compiler. It seems that it adheres to the standard C++/C++11 -- > (which is not the case for VS) > > y. > > > ------------------------------------------------------------------------------ > One dashboard for servers and applications across Physical-Virtual-Cloud > Widest out-of-the-box monitoring support with 50+ applications > Performance metrics, stats and reports that give you Actionable Insights > Deep dive visibility with transaction tracing using APM Insight. > http://ad.doubleclick.net/ddm/clk/290420510;117567292;y > _______________________________________________ > Kaldi-developers mailing list > Kal...@li... > https://lists.sourceforge.net/lists/listinfo/kaldi-developers > |
From: Jan T. <jt...@gm...> - 2015-05-06 21:15:15
|
All, just to have it here in case someone actually searches the list before asking: I was able to compile Kaldi under VS 2013 and OpenFST(Win) 1.3.4 (which I forked from Paul Dixon's 1.3.3, did some more fixes and upgraded to 1.3.4 -- but the kudos should go to Paul). The changes were committed to the repository, so it should be available to everyone. I'm planning to figure out if it's possible to add support for VS2015 and potentially OpenFST 1.4.x. OpenFST 1.4.x needs C++11, whose support, to my knowledge, is partial at best in VS2015, so we will see... BTW: I'm not sure if anyone will be able to maintain/fix the code to be "compilable" under MS VS. In case of trouble (and if you really really need Kaldi in VS), use the Intel C++ compiler. It seems that it adheres to the standard C++/C++11 (which is not the case for VS). y. |
From: Daniel P. <dp...@gm...> - 2015-05-06 18:43:50
|
Interesting. In practice you would definitely want to do some kind of pruning. I believe that decodable object does cache the likelihoods. I'm not sure that we'd want to check in this code unless it is demonstrated to be useful. My understanding is that forward-backward does not really have any advantage over Viterbi. Of course if you can demonstrate that it is somehow better, it would be a different story. Dan On Wed, May 6, 2015 at 10:01 AM, Joan Puigcerver <joa...@gm...> wrote: > Hi, > > I implemented a simple version of the forward and backward algorithms, > in order to support Baum-Welch training in Kaldi. There is still > plenty of work to do, mostly in terms of speed, but I achieved to > replicate HTK's F/B results (average likelihood/frame and HMM-states > occupancy stats) with both toy examples and real handwritten samples. > > I apologize for the length of this email, but I wanted to explain the > details of my implementation and ideas that I have to improve it. > > 1. High level description > ========================= > > a) Compile training graphs, as you would do with the Viterbi training: > > $ compile-train-graphs --self-loop-scale=0 --transition-scale=0 tree > 0.mdl lex.fst ark:train.transc.ark ark:train.fsts.ark > > b) Get the transition-id posteriors, for your training data: > > $ gmm-fb-compiled --self-loop-scale=1 --transition-scale=1 0.mdl > ark:train.fsts.ark ark:train.features.ark ark:0.posteriors.ark > > Source code: > https://github.com/jpuigcerver/kaldi/tree/em/trunk/src/gmmbin/gmm-fb-compiled.cc > > This program is similar to gmm-align-compiled, the idea is that > instead of obtaining a list of transition-ids for each training > sample, you get a list of posteriors over the transition-ids for each > training sample. This is the only thing that you need to do the MLE > reestimation. 
> > c) Accumulate statistics from the posteriors, using gmm-acc-stats, as usual: > > gmm-acc-stats 0.mdl ark:train.features.ark ark:0.posteriors.ark 0.acc > > d) MLE re-estimation: > > gmm-est 0.mdl 0.acc 1.mdl > > > 2. Implementation details > ========================= > > gmm-fb-compiled uses the Forward/Backward algorithm to compute the > transition-id posteriors. The Forward/Backward algorithms are > implemented, for now, in two separate classes that can be found in: > > https://github.com/jpuigcerver/kaldi/tree/em/trunk/src/fb > > Both SimpleForward and SimpleBackward classes are very similar to > SimpleDecoder, so it should not be difficult to read them: > > Basically, I keep a list of unordered_maps (one for each timestep) of > active Tokens, similar to the used in the Viterbi algorithm. Instead > of storing the cost of the path with the minimum cost to the given > state, I store the log-sum-exp of all paths to the given state. > > The most significant difference is in the ProcessNonemitting() method > of both classes: Viterbi algorithm uses the Tropical semiring, which > is idempotent with respect to the product operation (min), while > Foward/Backward use the Log semiring, which is not idempotent respect > it (log-sum-exp operation). This makes that some simplifications that > work for Viterbi's algorithm, do not apply for Forward/Backward's. I > implemented the same algorithm that OpenFST's fstshortestdistance > uses, in order to ensure the correct computation of the > Forward/Backward scores, even when epsilon transitions or loops are > present in the input WFST. See [1] for more details about the > algorithm. > > 3. Experiments > ============== > > I did some experiments to check my implementation: > > With some toy examples I used OpenFST's fstshortestdistance tool to do > the forward/backward pass. These toy examples include some > not-so-trivial WFST with epsilon transitions and even epsilon-loops > and bucles (cycles). 
These kind of WFST would not be possible with > HTK. > > I also compared the occupancy statistics obtained by HTK with the ones > obtained by gmm-acc-stats, after the first iteration of my Baum-Welch > recipe, using real handwritten text recognition data. > > I implemented several toy tests that can be run with > fb/simple-forward-test, fb/simple-backward-test and > fb/simple-posteriors-test. > > You can find the files required to run the comparison with HTK in: > https://www.prhlt.upv.es/~jpuigcerver/kaldi_bentham.tar.gz > https://www.prhlt.upv.es/~jpuigcerver/htk_bentham.tar.gz > > The log-likelihood of each training sample and the average/frame is in > this file: > https://www.prhlt.upv.es/~jpuigcerver/kaldi_vs_htk_likelihood_comparison.txt > > And the summary of the HMM-state occupancy of both systems is here: > https://www.prhlt.upv.es/~jpuigcerver/kaldi_hmmstate_occupancy.txt > https://www.prhlt.upv.es/~jpuigcerver/htk_hmmstate_occupancy.txt > > As you can see, both results are very similar. I assume the diferences > are due to implementation details. > > 4. Improvement ideas > ==================== > > a) Current implementation is about 20-30 times slower than HTK's. > Possible reasons: > > a.1) The backward pass is most likely the bottleneck, plus beam > pruning would add no-benefit: The problem is OpenFST's traversal of > the arcs. There is no efficient way to traverse the incoming arcs, > given a particular node (I mean in O(k), where k is the number of > outgoing arcs of a particular state): ArcIterator only allows to > traverse the outgoing arcs, given a node. This is a problem in the > backward pass, since I need to traverse basically all arcs in the WFST > to detect those which enter a particular active state in the backward > pass (see source ProcessEmitting/ProcessNonemitting code in > SimpleBackward.cc). > > A possible solution would be to work with the transpose of the WFST in > the backward pass, similar to what [2] do. 
Unfortunately, this would > increase the memory cost. > > Another option, which would not increase the memory cost so > dramatically, would be to implement a custom WFST class that has both > input/output arcs information in each node, but this would certainly > require more programming efforts. > > a.2) HTK always does a beam pruning in the forward pass: First it does > the backward pass. Then, once it is doing the forward pass, at a given > time t, and state s, one can check whether beta(t, s) == 0. If so, > even if alpha(t, s) > 0, there is no point in keeping this token > alive, since we know that this token will not lead to any final state. > (Does anyone know why HTK does first the backward pass? AFAIK, the > same pruning could be done if the forward pass was done first: one > would just prune the backward pass instead) > > a.3) This is more a question than a observation: Does the > DecodableAmDiagGmm cache the GMM log-likelihood computations? If not, > this would probably also improve the running time of the F/B passes. > > b) Memory usage can also be improved: It consumes 3-4 times more > memory than HTK. This is not so critical as the speed, but certainly > would be appreciated. Right now, I keeping two copies of the trellis > in RAM: one for the forward pass, and one for the backward pass. > > I think only a copy is needed (say, for the forward pass), plus a few > extra rows of the trellis (say for t, and t - 1). If the forward > scores are available, one can simply compute alpha(s, t) * beta(s, t) > on the fly during the backward pass, to compute the state posteriors, > or alpha(s, t) * beta(q, t+1) to compute the edge posteriors. > > I would like to hear comments and suggestions, regarding the current > implementation and, especially, the ideas that I have to improve the > speed/memory. 
> > I'd be very glad to hear comments from Kaldi's main developers, which > probably have much more experience implementing Baum-Welch training, > than I do, and know many implementation tricks. Also, since I aspire > to include my code to the Kaldi main branch, at some point, it should > comply with Kaldi's code standards. > > Thank you. > > [1] Mohri, Mehryar. "Semiring frameworks and algorithms for > shortest-distance problems." Journal of Automata, Languages and > Combinatorics 7.3 (2002): 321-350. > [2] Hannemann, Mirko, Daniel Povey, and Geoffrey Zweig. "Combining > forward and backward search in decoding." Acoustics, Speech and Signal > Processing (ICASSP), 2013 IEEE International Conference on. IEEE, > 2013. > > 2015-04-01 23:30 GMT+02:00 Daniel Povey <dp...@gm...>: >>> However, I am not so sure how to implement the Backward algorithm, since I >>> must traverse the edges in the FST backwards (to do the backward pass in O(T >>> * (V + E))), and OpenFST does not support this AFAIK. Also, I am not sure if >>> simply transposing the FST would work, since I would have many initial >>> states... Any suggestion on that? >> >> It should be possible to handle this by arranging your code in the right >> way, e.g. first iterate over the start-state of the arc. >>> >>> > This would create a difficulty for converting alignments though (this >>> > happens >>> > when bootstrapping later systems, e.g. starting tri2 from tri1). You >>> > would >>> > probably have to just to Viterbi for that one stage. >>> >>> I am not sure what you mean. Could you extend your explanation or point to >>> a recipe where you had to overcome this difficulty? >> >> I mean the program convert-ali wouldn't work unless you had alignments. >> Dan >> >>> >>> Many thanks for your help and advices, >>> >>> Joan Puigcerver. >> >> |
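[Editor's note] The tropical-vs-log semiring difference Joan describes, and the alpha·beta on-the-fly posterior idea from section 4(b) of his message, can be sketched as small helpers. The names are mine, not Kaldi's; weights are costs, i.e. negated log-probabilities, as in OpenFST:

```cpp
#include <algorithm>
#include <cmath>

// Tropical semiring (Viterbi): combining two path costs keeps only
// the best path. min is idempotent: combining a cost with itself
// changes nothing, which enables the Viterbi-only simplifications.
inline double ViterbiCombine(double a, double b) {
  return std::min(a, b);
}

// Log semiring (forward/backward): -log(exp(-a) + exp(-b)), i.e. the
// probabilities of both paths are summed. Computed stably via log1p.
// Not idempotent: combining a cost with itself lowers it by log(2).
inline double LogCombine(double a, double b) {
  double lo = std::min(a, b), hi = std::max(a, b);
  return lo - std::log1p(std::exp(lo - hi));
}

// Memory-saving idea from section 4(b): with the forward trellis kept
// and the total data log-likelihood known, the state log-posterior at
// (s, t) can be formed on the fly during the backward sweep instead of
// storing a full second trellis.
inline double LogStatePosterior(double log_alpha, double log_beta,
                                double log_total) {
  return log_alpha + log_beta - log_total;
}
```

Note that `LogCombine` never returns a cost above the better input, which is why pruning thresholds tuned for Viterbi carry over only approximately to the log semiring.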
From: Joan P. <joa...@gm...> - 2015-05-06 17:01:51
|
Hi, I implemented a simple version of the forward and backward algorithms, in order to support Baum-Welch training in Kaldi. There is still plenty of work to do, mostly in terms of speed, but I managed to replicate HTK's F/B results (average likelihood/frame and HMM-states occupancy stats) with both toy examples and real handwritten samples. I apologize for the length of this email, but I wanted to explain the details of my implementation and ideas that I have to improve it. 1. High level description ========================= a) Compile training graphs, as you would do with the Viterbi training: $ compile-train-graphs --self-loop-scale=0 --transition-scale=0 tree 0.mdl lex.fst ark:train.transc.ark ark:train.fsts.ark b) Get the transition-id posteriors, for your training data: $ gmm-fb-compiled --self-loop-scale=1 --transition-scale=1 0.mdl ark:train.fsts.ark ark:train.features.ark ark:0.posteriors.ark Source code: https://github.com/jpuigcerver/kaldi/tree/em/trunk/src/gmmbin/gmm-fb-compiled.cc This program is similar to gmm-align-compiled; the idea is that instead of obtaining a list of transition-ids for each training sample, you get a list of posteriors over the transition-ids for each training sample. This is the only thing that you need to do the MLE reestimation. c) Accumulate statistics from the posteriors, using gmm-acc-stats, as usual: gmm-acc-stats 0.mdl ark:train.features.ark ark:0.posteriors.ark 0.acc d) MLE re-estimation: gmm-est 0.mdl 0.acc 1.mdl 2. Implementation details ========================= gmm-fb-compiled uses the Forward/Backward algorithm to compute the transition-id posteriors. 
The Forward/Backward algorithms are implemented, for now, in two separate classes that can be found in: https://github.com/jpuigcerver/kaldi/tree/em/trunk/src/fb

Both the SimpleForward and SimpleBackward classes are very similar to SimpleDecoder, so it should not be difficult to read them: basically, I keep a list of unordered_maps (one for each timestep) of active Tokens, similar to the one used in the Viterbi algorithm. Instead of storing the cost of the minimum-cost path to a given state, I store the log-sum-exp over all paths to that state.

The most significant difference is in the ProcessNonemitting() method of both classes: the Viterbi algorithm uses the Tropical semiring, whose "sum" operation (min) is idempotent, while Forward/Backward use the Log semiring, whose "sum" (the log-sum-exp operation) is not. This means that some simplifications that work for the Viterbi algorithm do not apply to Forward/Backward. I implemented the same algorithm that OpenFST's fstshortestdistance uses, in order to ensure the correct computation of the Forward/Backward scores even when epsilon transitions or epsilon loops are present in the input WFST. See [1] for more details about the algorithm.

3. Experiments
==============

I did some experiments to check my implementation: with some toy examples I used OpenFST's fstshortestdistance tool to do the forward/backward pass. These toy examples include some not-so-trivial WFSTs with epsilon transitions and even epsilon loops and cycles. These kinds of WFSTs would not be possible with HTK. I also compared the occupancy statistics obtained by HTK with the ones obtained by gmm-acc-stats, after the first iteration of my Baum-Welch recipe, using real handwritten text recognition data. I implemented several toy tests that can be run with fb/simple-forward-test, fb/simple-backward-test and fb/simple-posteriors-test.
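To make the semiring distinction discussed above concrete, here is a toy sketch in plain Python (costs, i.e. negative log-probabilities, as in OpenFST's Tropical and Log weights — none of this is Kaldi or OpenFST code):

```python
import math

def tropical_plus(a, b):
    """Tropical semiring "sum" over costs: keep the best path."""
    return min(a, b)

def log_plus(a, b):
    """Log semiring "sum" over costs: -log(exp(-a) + exp(-b))."""
    m = min(a, b)
    return m - math.log1p(math.exp(-abs(a - b)))

cost = 1.5
# Tropical "sum" is idempotent: merging a path with itself changes nothing,
# so epsilon loops can be processed once and forgotten in Viterbi.
assert tropical_plus(cost, cost) == cost
# Log "sum" is not: counting the same path twice lowers the cost by log 2,
# which is why epsilon loops need the general shortest-distance treatment.
assert abs(log_plus(cost, cost) - (cost - math.log(2.0))) < 1e-12
```

This is exactly the property [1] formalizes: the generic shortest-distance algorithm handles non-idempotent semirings like Log, at the price of revisiting states until convergence.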
You can find the files required to run the comparison with HTK in:
https://www.prhlt.upv.es/~jpuigcerver/kaldi_bentham.tar.gz
https://www.prhlt.upv.es/~jpuigcerver/htk_bentham.tar.gz

The log-likelihood of each training sample, and the average per frame, is in this file:
https://www.prhlt.upv.es/~jpuigcerver/kaldi_vs_htk_likelihood_comparison.txt

And the summary of the HMM-state occupancy of both systems is here:
https://www.prhlt.upv.es/~jpuigcerver/kaldi_hmmstate_occupancy.txt
https://www.prhlt.upv.es/~jpuigcerver/htk_hmmstate_occupancy.txt

As you can see, both results are very similar. I assume the differences are due to implementation details.

4. Improvement ideas
====================

a) The current implementation is about 20-30 times slower than HTK's. Possible reasons:

a.1) The backward pass is most likely the bottleneck, and beam pruning would add no benefit: the problem is OpenFST's traversal of the arcs. There is no efficient way to traverse the incoming arcs of a particular node (I mean in O(k), where k is the number of incoming arcs of that state): ArcIterator only allows traversing the outgoing arcs of a node. This is a problem in the backward pass, since I need to traverse basically all arcs in the WFST to detect those which enter a particular active state (see the ProcessEmitting/ProcessNonemitting code in SimpleBackward.cc). A possible solution would be to work with the transpose of the WFST in the backward pass, similar to what [2] does. Unfortunately, this would increase the memory cost. Another option, which would not increase the memory cost so dramatically, would be to implement a custom WFST class that stores both incoming and outgoing arcs for each node, but this would certainly require more programming effort.

a.2) HTK always does beam pruning in the forward pass: first it does the backward pass. Then, while doing the forward pass, at a given time t and state s, one can check whether beta(t, s) == 0.
If so, even if alpha(t, s) > 0, there is no point in keeping this token alive, since we know that it will not lead to any final state. (Does anyone know why HTK does the backward pass first? AFAIK, the same pruning could be done if the forward pass was done first: one would just prune the backward pass instead.)

a.3) This is more a question than an observation: does DecodableAmDiagGmm cache the GMM log-likelihood computations? If not, caching would probably also improve the running time of the F/B passes.

b) Memory usage can also be improved: it consumes 3-4 times more memory than HTK. This is not as critical as the speed, but improvements would certainly be appreciated. Right now, I keep two copies of the trellis in RAM: one for the forward pass and one for the backward pass. I think only one copy is needed (say, for the forward pass), plus a few extra rows of the trellis (say, for t and t - 1). If the forward scores are available, one can simply compute alpha(s, t) * beta(s, t) on the fly during the backward pass to obtain the state posteriors, or combine alpha(s, t) and beta(q, t+1) (together with the arc and observation scores) to obtain the edge posteriors.

I would like to hear comments and suggestions regarding the current implementation and, especially, the ideas that I have to improve the speed/memory. I'd be very glad to hear comments from Kaldi's main developers, who probably have much more experience implementing Baum-Welch training than I do, and know many implementation tricks. Also, since I aspire to include my code in the Kaldi main branch at some point, it should comply with Kaldi's code standards.

Thank you.

[1] Mohri, Mehryar. "Semiring frameworks and algorithms for shortest-distance problems." Journal of Automata, Languages and Combinatorics 7.3 (2002): 321-350.
[2] Hannemann, Mirko, Daniel Povey, and Geoffrey Zweig. "Combining forward and backward search in decoding." Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. IEEE, 2013.
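The single-trellis idea in (b) can be sketched for a plain fully-connected HMM (toy Python, no WFST machinery — a sketch of the bookkeeping, not Kaldi code): store the full alpha trellis, keep only a single beta row alive, and emit the state posteriors on the fly during the backward sweep.

```python
import math

def log_add(a, b):
    """log(exp(a) + exp(b)), safe against -inf."""
    if a == -math.inf:
        return b
    if b == -math.inf:
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def state_posteriors(loglik, logA):
    """State posteriors gamma[t][s] for a fully-connected toy HMM.
    loglik[t][s] is the frame log-likelihood, logA[s][q] the log
    transition probability.  The full alpha trellis is stored, but only
    one beta row is kept alive at any time."""
    T, S = len(loglik), len(loglik[0])
    alpha = [[-math.inf] * S for _ in range(T)]
    alpha[0] = list(loglik[0])              # implicit uniform start probs
    for t in range(1, T):
        for q in range(S):
            for s in range(S):
                alpha[t][q] = log_add(alpha[t][q],
                                      alpha[t - 1][s] + logA[s][q])
            alpha[t][q] += loglik[t][q]
    total = -math.inf                       # total log-likelihood
    for s in range(S):
        total = log_add(total, alpha[T - 1][s])
    beta = [0.0] * S                        # beta at the last frame is 1
    gamma = [None] * T
    for t in range(T - 1, -1, -1):
        # posterior on the fly: alpha(t, s) * beta(t, s) / total
        gamma[t] = [math.exp(alpha[t][s] + beta[s] - total) for s in range(S)]
        if t > 0:                           # roll beta back one frame
            new_beta = [-math.inf] * S
            for s in range(S):
                for q in range(S):
                    new_beta[s] = log_add(new_beta[s],
                                          logA[s][q] + loglik[t][q] + beta[q])
            beta = new_beta
    return gamma

# Two-state toy example: the posteriors sum to 1 at every frame.
logA = [[math.log(0.6), math.log(0.4)],
        [math.log(0.3), math.log(0.7)]]
loglik = [[math.log(0.8), math.log(0.1)],
          [math.log(0.2), math.log(0.5)],
          [math.log(0.6), math.log(0.3)]]
gamma = state_posteriors(loglik, logA)
assert all(abs(sum(row) - 1.0) < 1e-9 for row in gamma)
```

In the WFST setting the beta recursion would additionally need the incoming-arc traversal discussed in a.1), but the memory pattern — full alpha, streaming beta — is the same.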
2015-04-01 23:30 GMT+02:00 Daniel Povey <dp...@gm...>:
>> However, I am not so sure how to implement the Backward algorithm, since I
>> must traverse the edges in the FST backwards (to do the backward pass in O(T
>> * (V + E))), and OpenFST does not support this AFAIK. Also, I am not sure if
>> simply transposing the FST would work, since I would have many initial
>> states... Any suggestion on that?
>
> It should be possible to handle this by arranging your code in the right
> way, e.g. first iterate over the start-state of the arc.
>>
>> > This would create a difficulty for converting alignments though (this
>> > happens
>> > when bootstrapping later systems, e.g. starting tri2 from tri1). You
>> > would
>> > probably have to just do Viterbi for that one stage.
>>
>> I am not sure what you mean. Could you extend your explanation or point to
>> a recipe where you had to overcome this difficulty?
>
> I mean the program convert-ali wouldn't work unless you had alignments.
> Dan
>
>>
>> Many thanks for your help and advice,
>>
>> Joan Puigcerver.
>
>
|
From: Matthew A. <mat...@gm...> - 2015-05-06 10:53:13
|
Hi Michal,

Idlak - speech synthesis in Kaldi - is available on the sandbox/idlak branch. Currently an alpha front end is available which transforms US English text into full-context models, and aligns Blizzard data (with a script to download it). In our Interspeech paper from 2015 we tested this front end by using the models and segmented data.

The best place to start with the Idlak synthesis system in Kaldi is to look at our Interspeech paper from 2015. There (and within the documentation for Idlak) are instructions for downloading and building the HTS demo and using the front end within this system to produce speech output.

I am currently working on generating output trees and models, which leaves the following:

1. Mel generalised cepstrum generation (as per SPTK). We need these so we can reverse the transformation in synthesis. We could of course use other parameterisations and vocoders, but currently we could use the hts_engine vocoder with this parameterisation for testing purposes.

2. Arnab worked on feature extraction of banded noise estimation for mixed excitation vocoding, but I have not followed this up and integrated it into the voice building system.

3. Trajectory modelling. The algorithm for taking means and variances from models and producing a trajectory on a per-frame basis is well described, but we need to implement an Idlak version of this using the Kaldi matrix resources.

4. There is an issue in generating a vocoder. HTS has MLSA implemented, but it is hard to follow how this actually works (the original paper is not so detailed). However, someone within the Kaldi community might have a better insight into this than me.

Currently I am using the HTS demo as a test harness for development. At some point this will cease to be useful and require too many compromises in the design.

Very happy to get more input on this project from people, and contributions to the work also.
Yours Matthew On Tue, May 5, 2015 at 7:38 PM, Daniel Povey <dp...@gm...> wrote: > Matthew Aylett (cc'd) is working on a speech synthesis project called > "idlak", which is part of the Kaldi repository. I believe he recently > got it to the point where it produces output. Matthew, perhaps you > can comment, and show him where to look? > Dan > > > On Tue, May 5, 2015 at 2:45 AM, Michal Klíma <mic...@gm...> > wrote: > > Hello, > > My name is Michal Klíma and I'm student from Czech Republic (University > of > > West Bohemia). I'm working on my master's thesis and I want to try your > > system Kaldi. I have one important question. Are here any possibilities > to > > do speech synthesis with Kaldi? I have tried to find something about it, > but > > I haven't been succesfull yet. I know that Kaldi is very similar to > another > > speech regogniser toolkit named HTK and I know there is possibble to do > > speech synthesis. > > Yours sincerely > > Michal Klíma > > > > > ------------------------------------------------------------------------------ > > One dashboard for servers and applications across Physical-Virtual-Cloud > > Widest out-of-the-box monitoring support with 50+ applications > > Performance metrics, stats and reports that give you Actionable Insights > > Deep dive visibility with transaction tracing using APM Insight. > > http://ad.doubleclick.net/ddm/clk/290420510;117567292;y > > _______________________________________________ > > Kaldi-developers mailing list > > Kal...@li... > > https://lists.sourceforge.net/lists/listinfo/kaldi-developers > > > |
From: Tony R. <to...@sp...> - 2015-05-06 07:13:43
|
"It's not a bug, it's a feature" of CUDA 6 called unified memory - http://devblogs.nvidia.com/parallelforall/unified-memory-in-cuda-6/ (the idea is automatic transfer from CPU to GPU memory - personally I don't like it).

Tony

On 05/05/15 21:35, Daniel Povey wrote:
> The virtual memory reported by the OS is not accurate for programs
> that use the GPU. I don't know whether this is a bug in Linux or in
> NVidia's drivers.
> Dan
>
> On Tue, May 5, 2015 at 1:33 PM, Kirill Katsnelson
> <kir...@sm...> wrote:
>> 12797 kkm 20 0 53.0G 806M 112M R 97.5 2.5 3h44:56 nnet-train-mpe-sequential --feature-transform=exp/dnn4_pretrain-dbn_dnn_smbr/final.feature_transform --class-frame-counts=exp/dnn4_pretrain-dbn_dnn_sm
>>
>> nnet-train-mpe-sequential is using 53G of virtual memory, of which only 806M is resident. The memory does not seem to be backed by the swap file, because I see a zero use there. Is this some special uncommitted allocation done by CUDA? I just want to make sure there is not a leak, although the only thing leaking now seems to be the address space, and we have 2^64 of it anyway...
>>
>> -kkm

--
Dr A J Robinson, Founder
We are hiring: www.speechmatics.com/careers
Speechmatics is a trading name of Cantab Research Limited
Phone direct: 01223 778240 office: 01223 794497
Company reg no GB 05697423, VAT reg no 925606030
51 Canterbury Street, Cambridge, CB4 3QG, UK
|
From: Daniel P. <dp...@gm...> - 2015-05-05 20:35:07
|
The virtual memory reported by the OS is not accurate for programs that use the GPU. I don't know whether this is a bug in Linux or in NVidia's drivers.

Dan

On Tue, May 5, 2015 at 1:33 PM, Kirill Katsnelson <kir...@sm...> wrote:
> 12797 kkm 20 0 53.0G 806M 112M R 97.5 2.5 3h44:56 nnet-train-mpe-sequential --feature-transform=exp/dnn4_pretrain-dbn_dnn_smbr/final.feature_transform --class-frame-counts=exp/dnn4_pretrain-dbn_dnn_sm
>
> nnet-train-mpe-sequential is using 53G of virtual memory, of which only 806M is resident. The memory does not seem to be backed by the swap file, because I see a zero use there. Is this some special uncommitted allocation done by CUDA? I just want to make sure there is not a leak, although the only thing leaking now seems to be the address space, and we have 2^64 of it anyway...
>
> -kkm
|
From: Kirill K. <kir...@sm...> - 2015-05-05 20:33:41
|
12797 kkm 20 0 53.0G 806M 112M R 97.5 2.5 3h44:56 nnet-train-mpe-sequential --feature-transform=exp/dnn4_pretrain-dbn_dnn_smbr/final.feature_transform --class-frame-counts=exp/dnn4_pretrain-dbn_dnn_sm

nnet-train-mpe-sequential is using 53G of virtual memory, of which only 806M is resident. The memory does not seem to be backed by the swap file, because I see a zero use there. Is this some special uncommitted allocation done by CUDA? I just want to make sure there is not a leak, although the only thing leaking now seems to be the address space, and we have 2^64 of it anyway...

-kkm
|
From: Daniel P. <dp...@gm...> - 2015-05-05 19:54:09
|
Thanks a lot! Dan On Tue, May 5, 2015 at 12:49 PM, Tony Robinson <to...@sp...> wrote: > Hi Dan, > > .rpms do have dependency information, but rpm as a command doesn't know > how to deal with unknown dependencies. So either it works or you spend > hours trying to do it manually - not fun and not maintainable so I'd > rule out using rpms directly. Yum is the interface one layer up that > deals with dependencies, I like it better than apt-get (which still gets > in a mess far too often - but I'm now back to using over a decade later). > > A quick look at son of grid engine says that it's Redhat/CentOS 5/6 not > 7. The last release date is 23-Oct-2014. I don't think this gets us > anywhere as the instructions below will install on Redhat/CentOS 5/6 > anyway (it does get a more recent release and more complex > functionality, but we don't need that). > > Instructions for Redhat/CentOS 6 are below. > > > Tony > > # To install all > > yum -y install gridengine-qmaster gridengine-execd gridengine-qmon > > # note that gridengine-qmon has broken graphics dependencies on CentOS6 > so most hosts only need: > > yum -y install gridengine-execd > > # for gridmaster only > > cd /usr/share/gridengine > ./install_qmaster > > # take defaults except: > # > # install as root (first questions) > # do not enable the JMX MBean server > # use classic database > # use email to...@ca... > # add code0,code1,... 
or use /home/tonyr/hosts > # no shadow hosts > > # For all other hosts > > # For a new machine, on an *existing* host: > qconf -ah code41.cantabResearch.com > qconf -as code41.cantabResearch.com > > # On the new machine > cd /usr/share/gridengine > scp -rp code6:/usr/share/gridengine/default /usr/share/gridengine > rm -f /usr/share/gridengine/default/common/accounting > ./install_execd > # and say yes to everything (CR) > > # and add everyone as a administrator > > qconf -am tonyr,**DELETED** > > You wouldn't believe how many years it took to get the instructions down > to this level of simplicity... > > On 05/05/15 20:24, Daniel Povey wrote: >> It would be nice to know at least the outline of how you installed it >> on CentOS 6. >> >> I notice that the son of grid engine project here >> http://arc.liv.ac.uk/downloads/SGE/releases/8.1.8/ >> seems to have both .rpm and .deb packages. I assume this means one >> could get the packages from there and at least try to install them on >> RedHat-type systems? I don't know if the .rpm packages contain the >> dependency information? >> Dan >> >> >> On Tue, May 5, 2015 at 12:05 PM, Tony Robinson <to...@sp...> wrote: >>> CentOS 6 works fine and I can provide exact step-by-step instructions to >>> make it work if that helps. >>> >>> gridEngine isn't supported in CentOS 7 (or not when I looked) - so we >>> moved away from CentOS. >>> >>> >>> Tony >>> >>> On 05/05/15 19:45, Daniel Povey wrote: >>>> Hi, >>>> Does anyone on this list have any advice or experience for installing >>>> some kind of GridEngine on Red Hat linux or related distributions? >>>> Generally speaking I advise people to use Debian if they want >>>> GridEngine because it has packages that work. But I'm wondering if >>>> there is a solution available for Red Hat or Red Hat-derived >>>> distributions. 
>>>> Dan
>>>
>>> --
>>> Dr A J Robinson, Founder
>>> We are hiring: www.speechmatics.com/careers
>>> Speechmatics is a trading name of Cantab Research Limited
>>> Phone direct: 01223 778240 office: 01223 794497
>>> Company reg no GB 05697423, VAT reg no 925606030
>>> 51 Canterbury Street, Cambridge, CB4 3QG, UK
>
> --
> Dr A J Robinson, Founder
> We are hiring: www.speechmatics.com/careers
> Speechmatics is a trading name of Cantab Research Limited
> Phone direct: 01223 778240 office: 01223 794497
> Company reg no GB 05697423, VAT reg no 925606030
> 51 Canterbury Street, Cambridge, CB4 3QG, UK
|
From: Tony R. <to...@sp...> - 2015-05-05 19:49:59
|
Hi Dan,

.rpms do have dependency information, but rpm as a command doesn't know how to deal with unknown dependencies. So either it works or you spend hours trying to do it manually - not fun and not maintainable, so I'd rule out using rpms directly. Yum is the interface one layer up that deals with dependencies; I like it better than apt-get (which still gets in a mess far too often - but I'm now back to using it, over a decade later).

A quick look at Son of Grid Engine says that it's Redhat/CentOS 5/6, not 7. The last release date is 23-Oct-2014. I don't think this gets us anywhere, as the instructions below will install on Redhat/CentOS 5/6 anyway (it does get a more recent release and more complex functionality, but we don't need that).

Instructions for Redhat/CentOS 6 are below.

Tony

# To install all

yum -y install gridengine-qmaster gridengine-execd gridengine-qmon

# note that gridengine-qmon has broken graphics dependencies on CentOS6
# so most hosts only need:

yum -y install gridengine-execd

# for gridmaster only

cd /usr/share/gridengine
./install_qmaster

# take defaults except:
#
# install as root (first questions)
# do not enable the JMX MBean server
# use classic database
# use email to...@ca...
# add code0,code1,... or use /home/tonyr/hosts
# no shadow hosts

# For all other hosts

# For a new machine, on an *existing* host:
qconf -ah code41.cantabResearch.com
qconf -as code41.cantabResearch.com

# On the new machine
cd /usr/share/gridengine
scp -rp code6:/usr/share/gridengine/default /usr/share/gridengine
rm -f /usr/share/gridengine/default/common/accounting
./install_execd
# and say yes to everything (CR)

# and add everyone as an administrator

qconf -am tonyr,**DELETED**

You wouldn't believe how many years it took to get the instructions down to this level of simplicity...

On 05/05/15 20:24, Daniel Povey wrote:
> It would be nice to know at least the outline of how you installed it
> on CentOS 6.
>
> I notice that the son of grid engine project here
> http://arc.liv.ac.uk/downloads/SGE/releases/8.1.8/
> seems to have both .rpm and .deb packages. I assume this means one
> could get the packages from there and at least try to install them on
> RedHat-type systems? I don't know if the .rpm packages contain the
> dependency information?
> Dan
>
> On Tue, May 5, 2015 at 12:05 PM, Tony Robinson <to...@sp...> wrote:
>> CentOS 6 works fine and I can provide exact step-by-step instructions to
>> make it work if that helps.
>>
>> gridEngine isn't supported in CentOS 7 (or not when I looked) - so we
>> moved away from CentOS.
>>
>> Tony
>>
>> On 05/05/15 19:45, Daniel Povey wrote:
>>> Hi,
>>> Does anyone on this list have any advice or experience for installing
>>> some kind of GridEngine on Red Hat linux or related distributions?
>>> Generally speaking I advise people to use Debian if they want
>>> GridEngine because it has packages that work. But I'm wondering if
>>> there is a solution available for Red Hat or Red Hat-derived
>>> distributions.
>>> Dan
>>
>> --
>> Dr A J Robinson, Founder
>> We are hiring: www.speechmatics.com/careers
>> Speechmatics is a trading name of Cantab Research Limited
>> Phone direct: 01223 778240 office: 01223 794497
>> Company reg no GB 05697423, VAT reg no 925606030
>> 51 Canterbury Street, Cambridge, CB4 3QG, UK

--
Dr A J Robinson, Founder
We are hiring: www.speechmatics.com/careers
Speechmatics is a trading name of Cantab Research Limited
Phone direct: 01223 778240 office: 01223 794497
Company reg no GB 05697423, VAT reg no 925606030
51 Canterbury Street, Cambridge, CB4 3QG, UK
|
From: Daniel P. <dp...@gm...> - 2015-05-05 19:25:02
|
It would be nice to know at least the outline of how you installed it on CentOS 6.

I notice that the son of grid engine project here http://arc.liv.ac.uk/downloads/SGE/releases/8.1.8/ seems to have both .rpm and .deb packages. I assume this means one could get the packages from there and at least try to install them on RedHat-type systems? I don't know if the .rpm packages contain the dependency information?

Dan

On Tue, May 5, 2015 at 12:05 PM, Tony Robinson <to...@sp...> wrote:
> CentOS 6 works fine and I can provide exact step-by-step instructions to
> make it work if that helps.
>
> gridEngine isn't supported in CentOS 7 (or not when I looked) - so we
> moved away from CentOS.
>
> Tony
>
> On 05/05/15 19:45, Daniel Povey wrote:
>> Hi,
>> Does anyone on this list have any advice or experience for installing
>> some kind of GridEngine on Red Hat linux or related distributions?
>> Generally speaking I advise people to use Debian if they want
>> GridEngine because it has packages that work. But I'm wondering if
>> there is a solution available for Red Hat or Red Hat-derived
>> distributions.
>> Dan
>
> --
> Dr A J Robinson, Founder
> We are hiring: www.speechmatics.com/careers
> Speechmatics is a trading name of Cantab Research Limited
> Phone direct: 01223 778240 office: 01223 794497
> Company reg no GB 05697423, VAT reg no 925606030
> 51 Canterbury Street, Cambridge, CB4 3QG, UK
|
From: Tony R. <to...@sp...> - 2015-05-05 19:18:59
|
CentOS 6 works fine and I can provide exact step-by-step instructions to make it work if that helps.

gridEngine isn't supported in CentOS 7 (or not when I looked) - so we moved away from CentOS.

Tony

On 05/05/15 19:45, Daniel Povey wrote:
> Hi,
> Does anyone on this list have any advice or experience for installing
> some kind of GridEngine on Red Hat linux or related distributions?
> Generally speaking I advise people to use Debian if they want
> GridEngine because it has packages that work. But I'm wondering if
> there is a solution available for Red Hat or Red Hat-derived
> distributions.
> Dan

--
Dr A J Robinson, Founder
We are hiring: www.speechmatics.com/careers
Speechmatics is a trading name of Cantab Research Limited
Phone direct: 01223 778240 office: 01223 794497
Company reg no GB 05697423, VAT reg no 925606030
51 Canterbury Street, Cambridge, CB4 3QG, UK
|