torch5-devel Mailing List for Torch5: fast machine learning toolbox
Status: Pre-Alpha
Brought to you by: andresy
From: Clément F. <cle...@gm...> - 2010-07-12 16:23:15
Hi all,

I also noticed some severe memory leaks (lack of garbage collection) on tensor allocation. Try to run:

   for i = 1,1000000 do a = torch.Tensor(500,500) end

(you can stop it before it consumes all your memory...) At the end of this for loop there is only one valid reference to a tensor, but nothing ever gets garbage collected. Even a manual call to the garbage collector doesn't clean that up. Maybe I'm mistaken, and that kind of data cannot be handled by the garbage collector? If so, the workaround is to always pre-allocate tensors and only resize/fill them with new data in loops, as you would do in C... but that sort of defeats the purpose of a scripting language, right?

Another (less important) leak: the function qt.QImage.fromTensor(tensor) returns a freshly allocated QImage that doesn't seem to be garbage collectible. That basically gives QT displays a limited lifetime :-(, as any tensor needs to be converted like that first. I've written another function that fills a QImage passed as an argument, to avoid this leak, but it would be better if the QImage were simply collected at some point.

Clement.
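The workaround Clément mentions, pre-allocating a tensor and reusing it instead of allocating inside the loop, can be sketched like this (a minimal illustration against the Torch5 tensor API described in this thread; the fill values are placeholders for real data):

```lua
-- Allocate once, outside the loop.
local buffer = torch.Tensor(500, 500)

for i = 1, 1000000 do
  -- Reuse the same memory: resize (a no-op when the size is unchanged)
  -- and overwrite, instead of calling torch.Tensor(500, 500) each time.
  buffer:resize(500, 500)
  buffer:fill(i)   -- stand-in for whatever loads new data into the tensor
end
```

This is the "as you would do it in C" pattern from the message: no per-iteration allocation, so the leak never accumulates.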
From: JIA P. <jp...@gm...> - 2010-06-14 05:19:17
Hi, all:

Since all the 3rd-party libraries can be grabbed from the Ubuntu repository directly, I'm wondering how I can compile the current torch5 SVN after removing the subdirectory "3rdparty". I'd like to do so because some of the bundled 3rd-party libraries are a bit old, and I really don't want to install two versions of the same libraries on one computer. For example:

   3rdparty folder   Ubuntu repository
   lua-5.1.3         lua-5.1.4
   cairo-1.6.4       cairo-1.8.10
   etc...

Can anybody help? By the way, is torch5 still alive?

I'm currently using CMake, and after removing the entire "3rdparty" subfolder I obtained the following error messages:

   CMake Error at scripts/TorchPackage.cmake:29 (FIND_PACKAGE):
     Could not find module FindLua.cmake or a configuration file for package
     Lua. Adjust CMAKE_MODULE_PATH to find FindLua.cmake or set Lua_DIR to
     the directory containing a CMake configuration file for Lua. The file
     will have one of the following names: LuaConfig.cmake lua-config.cmake
   Call Stack (most recent call first):
     CMakeLists.txt:52 (INCLUDE)

   CMake Error at libraries/luaT/CMakeLists.txt:3 (FIND_PACKAGE):
     Could not find module FindLua.cmake or a configuration file for package
     Lua. Adjust CMAKE_MODULE_PATH to find FindLua.cmake or set Lua_DIR to
     the directory containing a CMake configuration file for Lua. The file
     will have one of the following names: LuaConfig.cmake lua-config.cmake

   CMake Error at libraries/TH/CMakeLists.txt:13 (CHECK_FUNCTION_EXISTS):
     Unknown CMake command "CHECK_FUNCTION_EXISTS".

-- Welcome to Vision Open http://www.visionopen.com
From: sridhar i. <sri...@gm...> - 2010-02-23 17:41:00
Hi,

I am developing an activity recognition module using torch5 (thanks a lot for this wonderful environment!). I was going through the tutorial for nn's and declared the datasets as per the tutorial, but every time I issue the "trainer:train(dataset)" command it throws this error:

   # StochasticGradient: training
   ...ogram Files/Torch5 0.8.0/share/lua/5.1/nn/Linear.lua:40: size mismatch
   stack traceback:
           [C]: in function 'addT2dotT1'
           ...ogram Files/Torch5 0.8.0/share/lua/5.1/nn/Linear.lua:40: in function 'forward'
           ...m Files/Torch5 0.8.0/share/lua/5.1/nn/Sequential.lua:26: in function 'forward'
           ...Torch5 0.8.0/share/lua/5.1/nn/StochasticGradient.lua:36: in function 'train'
           C:/Program Files/Torch5 0.8.0/simptutnn:23: in main chunk
           [C]: in function 'dofile'
           [string "dofile('C:/Program Files/Torch5 0.8.0/simpt..."]:1: in main chunk
           [C]: ?
   || finished after 0.359 seconds

Kindly help!

yours truly,
Krishnan
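For reference, the "size mismatch" raised in Linear.lua usually means the input tensor handed to forward() does not match the input size given to nn.Linear. A hedged sketch of the failure mode (the sizes here are illustrative, not taken from Krishnan's script):

```lua
local layer = nn.Linear(10, 2)     -- expects inputs with 10 elements

local ok = torch.Tensor(10):fill(1)
layer:forward(ok)                   -- works: input size matches

local wrong = torch.Tensor(7):fill(1)
-- layer:forward(wrong)             -- would raise "size mismatch" inside
                                    -- Linear.lua, as in the traceback above
```

So the thing to check is that every dataset[i][1] tensor has exactly the number of elements declared in the first nn.Linear of the network.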
From: Franck L <fra...@gm...> - 2009-06-14 21:42:56
Hi all,

I am very new to Torch5 and Lua, but experienced in C and other NN libraries (Fann, libann). First, congratulations on the work; I like the modular design of Torch. I have been making some tries using data previously used with the Fann library. A few results and questions/comments:

- I use a dataset with 38 inputs (normalized range -1..+1) and 1 binary output (1/0). I have about 300,000 samples in the dataset.
- A first simple test with one nn.Linear(38, 3) and a Tanh function, using an MSECriterion for the error, consumes about 3 MB of memory per iteration!
- I have used the trainer.train method and also built the loop the "manual" way; I got the same results regarding memory leaks.

I wonder where the leak is located. I admit I am very new to Lua, but I guess it is somewhere in the C code. Did you guys already notice this?

Now a few questions comparing Torch with other libraries, mainly about the MSE error. When I run the network described above (300,000 records, 38 inputs, 1 output) with a learning rate of 0.01 and no decay, the error just seems to stay the same after the second iteration. I can run about 200 iterations (before the memory use breaks the process) and the output error is still the same value. The only way to get the error value moving is to set a decay; I tried 0.01 and the error gets lower, but very slowly. If I compare the error on the same dataset with another library (Fann), the result is very different: the Torch MSE error is about 0.18 after 100 iterations (and not decreasing), while the Fann MSE error is about 0.047 after 100 iterations and still decreasing...

Maybe I am not using Torch correctly. I have been playing with all the samples and tried different functions, layers, etc., but I am still puzzled by the results. Any help/guidance would be greatly appreciated.

Regards,
Franck
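While the leak is unresolved, one mitigation for the growth Franck describes is a manual training loop with an explicit garbage-collection pass per epoch. This is a sketch only: the method names follow the nn conventions discussed on this list and may differ in Torch5, `dataset` is assumed to provide :size() and [i], and whether collectgarbage() actually reclaims the tensors is exactly what is in question in this thread.

```lua
local mlp = nn.Sequential()
mlp:add(nn.Linear(38, 1))          -- 38 inputs, 1 output, as in the post
mlp:add(nn.Tanh())
local criterion = nn.MSECriterion()

for epoch = 1, 100 do
  for i = 1, dataset:size() do
    local input, target = dataset[i][1], dataset[i][2]
    criterion:forward(mlp:forward(input), target)
    mlp:zeroGradParameters()
    mlp:backward(input, criterion:backward(mlp.output, target))
    mlp:updateParameters(0.01)     -- the learning rate from Franck's setup
  end
  collectgarbage()                 -- explicit full GC pass per epoch
end
```

The manual loop also makes it easy to print the accumulated criterion output per epoch, which is the number to compare against Fann's reported MSE.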
From: Ronan C. <ro...@co...> - 2009-01-15 20:42:02
Hey,

(1) Due to some serious bugs in MapStorage, I had to modify the API. Sorry. MapStorage no longer exists; it is now handled by a new constructor of Storage. Ex:

   torch.CharStorage(fileName, shared)

If the optional argument "shared" is true, the mapping will be shared amongst all processes on the computer. In that case, the file must have read-write access, and modifications made to the storage will be reflected in the file. This corresponds to the old behavior of MapStorage. If "shared" is false (or not provided), the file must be at least readable, and modifications made to the storage will not affect the file. This fixes the bug reported by Gregoire about a segfault when freeing a MapStorage.

(2) A new shortcut to create a 1D Tensor over a Storage has been added. Ex:

   s = torch.DoubleStorage(100)
   t = torch.Tensor(s)
   -- in previous versions you had to do torch.Tensor(s, 1, s:size())

The doc has been updated accordingly for (1) and (2) in the SVN.

Cheers,
Ronan.
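Putting (1) and (2) together, a short sketch of the new API (the file name and values are illustrative assumptions):

```lua
-- (1) memory-mapped storage via the new Storage constructor
local shared = torch.CharStorage('/tmp/data.bin', true)  -- file must be read-write
shared[1] = 42                        -- change is written back to the file

local private = torch.CharStorage('/tmp/data.bin')       -- readable is enough
private[1] = 0                        -- the file is left untouched

-- (2) the new 1D-Tensor-over-Storage shortcut
local s = torch.DoubleStorage(100)
local t = torch.Tensor(s)             -- was torch.Tensor(s, 1, s:size())
```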
From: Ronan C. <ro...@co...> - 2008-09-16 22:24:41
Hey,

The SVN repository has been cleaned up. You will have to check it out again; I am sorry for the inconvenience. The instructions differ slightly from the previous ones. It is now:

   svn checkout https://torch5.svn.sourceforge.net/svnroot/torch5/trunk torch

(while before it was 'svn checkout https://torch5.svn.sourceforge.net/svnroot/torch5/trunk/torch'). The second argument 'torch' is the name you want for the directory; you can call it whatever you want.

A contrib-packages SVN repository has been created. It is optional, but you can check it out with:

   cd torch
   svn checkout https://torch5-contrib.svn.sourceforge.net/svnroot/torch5-contrib/trunk contrib

Notes:
(1) contrib packages should be checked out in the torch main directory. They must be checked out in "contrib" in this main directory so that CMake can detect them.
(2) to compile a contrib package, like "jpeg", you add -DWITH_CONTRIB_jpeg=1 when calling CMake. The package should then be compiled with the rest.
(3) there is no trace of the "external/" directory anymore in the torch5 SVN repository.

Ronan.

On Sep 16, 2008, at 4:46 PM, Ronan Collobert wrote:
> the svn repository is still under re-arrangement. i just submitted
> the new clean dump svn file to sourceforge, and the sourceforge
> guys should update it in the next hours. please, i *insist* do not
> commit or even update in the meantime. i will keep you updated.
>
> ronan.
From: Ronan C. <ro...@co...> - 2008-09-10 19:06:15
Hi,

*** luaT

I made major changes in luaT so that it handles class inheritance better. From the Lua side, this has little impact. From the C side, a few functions changed. In particular, luaT_newmetatable() now takes:
* a string for the name of the class
* a string for the parent class (or NULL) (this was previously an id)
* a constructor (or NULL)
* a destructor (or NULL)
* a factory (or NULL)

Once a constructor/destructor or factory is set, it cannot be changed, which avoids messing things up from Lua. Also, the true metatable is now hidden from Lua; you can only see the metatable containing the methods (so you cannot mess up the constructor or __gc from there).

New feature: all functions of a class can now be accessed in the following way: torch.Tensor.fill(x, 1) instead of x:fill(1).

*** MapStorage

As several people seemed interested in it, I added MapStorage to the torch module. It works on the platforms we support.

   x = torch.DoubleMapStorage(filename)

returns a kind of DoubleStorage which maps the contents of a file into memory.

*** SVN

The website has been updated. Torch should now officially be retrieved from SVN.

Ronan.
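The new uniform access to class functions can be illustrated briefly (a sketch; the 3x3 size is arbitrary):

```lua
local x = torch.Tensor(3, 3)

-- method syntax and table-access syntax are now interchangeable:
x:fill(1)
torch.Tensor.fill(x, 1)

-- which also makes it easy to pass a method around as a plain function:
local filler = torch.Tensor.fill
filler(x, 0)
```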
From: Ronan C. <ro...@co...> - 2008-08-14 20:19:38
hi,

i had to clean up luaT before everybody starts to use it, to be a bit more consistent with luaL. here are the naming changes:

   luaT_istorch       became  luaT_id
   luaT_getmetatable  became  luaT_pushmetatable
   luaT_typename      became  luaT_id2typename
   luaT_id            became  luaT_typename2id
   luaT_checkid       became  luaT_checktypename2id

new functions: luaT_typerror and luaT_typename. i changed the packages accordingly. this should not bother you too much, as few of you were using luaT. documentation coming soon.

ronan.
From: Ronan C. <ro...@co...> - 2008-08-13 21:36:00
hey guys,

*** i made two major changes in torch 5.1:
- added a versioning system to objects saved on disk
- changed Tensor and Storage sizes from "int" to "long"

we can now use all the gigs of ram we want :) the models you saved previously should be loadable with the new torch, thanks to the versioning system. if you have problems loading an old model, please let me know.

*** there is one incompatibility coming out of this: if you were creating a tensor with something like

   sz = torch.IntStorage(2); sz[1] = 4; sz[2] = 5
   x = torch.Tensor(sz)

this does not work anymore: you have to use LongStorage instead. i believe this was rare anyway, so it should not bother you too much.

cheers,
ronan.
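The fix for the incompatibility is mechanical: build the size storage as a LongStorage instead of an IntStorage. A sketch of the updated idiom:

```lua
-- sizes are now longs, so use LongStorage to describe them
local sz = torch.LongStorage(2)
sz[1] = 4
sz[2] = 5
local x = torch.Tensor(sz)   -- a 4x5 tensor
```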
From: Ronan C. <ro...@co...> - 2008-08-06 21:38:36
Guys,

I corrected a bug occurring in the map*() and apply() functions (torch & lab packages). The bug appeared only if the function you provided as an argument sometimes returned a number and sometimes nothing (or nil). If it always returned a number, or always returned nothing (or nil), the behavior was correct.

I also corrected the doc concerning these functions. In particular, lab.apply() had previously been renamed lab.map() to be consistent with the torch package functions, but the doc had not been updated (and was wrong).

Cheers,
Ronan.
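The mixed-return case the fix covers looks like this (a sketch; it assumes, as in later Torch versions, that returning a number replaces the element while returning nothing/nil leaves it unchanged):

```lua
local x = torch.Tensor(5)
for i = 1, 5 do x[i] = i - 3 end   -- x holds -2, -1, 0, 1, 2

x:apply(function(v)
  if v < 0 then return 0 end       -- replace negatives with 0...
  -- ...and return nothing for the rest: those elements are kept.
  -- This number-sometimes/nil-sometimes pattern is what used to
  -- trigger the bug described above.
end)
```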
From: Ronan C. <ro...@co...> - 2008-06-05 15:54:51
Hey,

[I] A lot of QT-related code has been added to the CVS. Everything has been written by Leon Bottou. It makes it easy to build (powerful) graphics/GUIs directly from Lua. Several packages have been written for that purpose:

* qt: general utilities for interfacing Variants/Objects/Classes from QT, plus functions for dealing with signals & slots.
* qttorch: converts tensors to/from QT images
* qtuiloader: loads QT GUI "ui" files (e.g. written with the QT designer)
* qtwidget: interface to QT widgets

These packages are available if you run "qlua" (a variant of the "lua" program which knows about these packages) instead of "lua". Another Torch package has been created:

* graphics: graphics drawing primitives interfaced to Cairo or QT. The QT interface is used if QT is installed.

CMake detects whether you have QT 4.3 or higher installed, in which case it compiles these modules. Because of several annoying bugs in CMake, the minimum required CMake versions are now:
- 2.6.0 under Windows (native & Cygwin) [Cygwin now contains this version, and under the native Windows platform you can also (re)-install with the easy installer]
- 2.4.8 under MacOS X (update your MacPorts, or re-install with the easy Mac installer)
- 2.4.7 under Linux

The website has been updated accordingly, and more doc is coming soon. Please let me know if you experience any problems.

[II] Following several complaints from people who had not installed fontconfig & freetype2 on Linux systems (and who got a compilation error because Cairo needs those to be compiled), the compilation of Cairo is no longer attempted under Linux if those packages are not installed.

Ronan.
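A tiny taste of the new packages (a sketch, to be run under qlua; qt.QImage.fromTensor is the conversion function mentioned elsewhere on this list, and the module-loading lines are assumptions that may differ per install):

```lua
-- run with: qlua example.lua
require('lab')
require('qttorch')

local t = lab.rand(64, 64)            -- a random grayscale "image" tensor
local img = qt.QImage.fromTensor(t)   -- tensor -> QImage, ready for a widget
```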
From: Ronan C. <ro...@co...> - 2008-05-09 20:22:47
Hey,

So I cleaned up the torch package in Torch 5.1. I updated the doc as well (in the CVS, not on the website yet). You might be interested in some functions like map, map2, and apply, which were undocumented until now :) I also improved the behavior of the overloaded operators.

Note: the order of arguments in map and map2 changed. I think it makes more sense this way; as probably nobody knew about them, it should be ok. I will soon commit the new website.

Cheers,
Ronan.
From: Ronan C. <ro...@co...> - 2007-12-04 16:20:19
> How do you decide whether a class should be a C++ class or a Lua class?

If Lua will not slow it down much, do it in Lua. Not only is it faster to program, it is also more powerful: classes in Lua can use both Lua and C++ classes, while classes in C++ can only use C++ classes... When you want to quickly try whether an idea works, do it in Lua first as well; if it does not work, there is no need to spend hours debugging your C++ code!

> Or why do you have for example in the nn package a
> StochasticGradient.lua instead of a StochasticGradient.cpp?

StochasticGradient is basically a loop which makes calls to functions -- not very CPU intensive! So Lua is great for that. It allows StochasticGradient to handle Lua GradientModules, in case you want to quickly design some dirty modules to try out an idea. You can also provide it Lua datasets, which is extremely handy: as long as you have a Lua class which handles :size() and [i], StochasticGradient will treat it as a dataset!

> And in the lab package you have a matelem.cpp AND a matelem.lua!?

All lab functions use numbers intensively and are thus coded in C++. However, functions like rand(), zeros(), ... can take either integers, as in rand(5,2), or an IntStorage as an argument. With integers they have to handle a variable number of arguments, and that is much easier to do in Lua than in C++ (at low cost). So this top part is done in Lua, which then calls the underlying C++ function: once again, the CPU-intensive part (the calculations, the numerical stuff) is done in C++, and the rest in Lua.

Ronan.
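The "anything with :size() and [i] is a dataset" point is worth a concrete sketch. This is a minimal illustration (the XOR data is an invented example, `mlp` stands for any module, and the StochasticGradient constructor arguments are an assumption based on later Torch usage):

```lua
-- a plain Lua table acting as a dataset
local dataset = {}
function dataset:size() return 4 end
for i = 1, dataset:size() do
  local input  = torch.Tensor(2)
  input[1] = (i == 2 or i == 4) and 1 or 0
  input[2] = (i == 3 or i == 4) and 1 or 0
  local target = torch.Tensor(1)
  target[1] = (input[1] == input[2]) and 0 or 1   -- XOR of the two bits
  dataset[i] = {input, target}
end

-- StochasticGradient only ever calls dataset:size() and dataset[i]
local trainer = nn.StochasticGradient(mlp, nn.MSECriterion())
trainer:train(dataset)
```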
From: Mikaela K. <mik...@gm...> - 2007-12-04 14:33:08
Hi,

I suppose this is (or will be) a very common question... let's have it documented. How do you decide whether a class should be a C++ class or a Lua class? Or why do you have, for example, in the nn package a StochasticGradient.lua instead of a StochasticGradient.cpp? And in the lab package you have a matelem.cpp AND a matelem.lua!?

Thanks,
Mika
From: Ronan C. <ro...@co...> - 2007-11-28 20:44:10
Hey,

Jason pointed out to me that Concat in the nn package was not working. In fact, it was really buggy -- the implementation was not even finished. I corrected it, and improved it.

   l = nn.Concat([dim])

now takes an optional dim argument: the dimension along which you want to concatenate (default 1) [that is new; before it was 1 with no option]. Each module added to Concat (with l:add(...)) must take the same number of inputs. Their outputs are concatenated (along the dim given in the constructor) and returned by the Concat layer. You can feed tensors of any dimension to Concat (provided that the modules inside understand them). The output sizes of the modules may differ in dimension "dim", but must (of course!) be the same in the other dimensions.

ronan.
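A short sketch of the corrected Concat (the layer sizes are illustrative assumptions):

```lua
local l = nn.Concat(1)        -- concatenate along dimension 1 (the default)
l:add(nn.Linear(10, 3))       -- both branches take the same input...
l:add(nn.Linear(10, 5))       -- ...and may differ in size along dim 1 only

local out = l:forward(torch.Tensor(10):fill(1))
-- out has 3 + 5 = 8 elements: the two branch outputs, concatenated
```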
From: Ronan C. <ro...@co...> - 2007-11-20 22:42:10
hey,

i added:
- apply() in Tensor. x:apply(func) applies func to every element of x, and returns x. a bit like lab.apply(), but with no memory allocation.
- lab.range(xmin, xmax [, step]) returns a Tensor with values between xmin (included) and xmax (possibly included, depending on the step). the step between values is 1 (default) or step (if given).
- lab.randperm(n), as in matlab, returns a Tensor containing a permutation of the integers 1..n. similar to random.shuffledIndices() (but shuffledIndices() returns a Storage).

ronan.
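A quick tour of the three additions (a sketch; the randperm result is one possible permutation, not a fixed output):

```lua
-- in-place apply on a Tensor, returning the tensor itself
local x = torch.Tensor(3):fill(2)
x:apply(function(v) return v * v end)   -- x now holds 4, 4, 4

-- range with the default step of 1, then with an explicit step
local a = lab.range(1, 5)       -- 1, 2, 3, 4, 5
local b = lab.range(1, 10, 3)   -- 1, 4, 7, 10

-- a random permutation of the integers 1..5, e.g. 3, 1, 5, 2, 4
local p = lab.randperm(5)
```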
From: Ronan C. <ro...@co...> - 2007-11-20 21:26:17
Hey,

---- disclaimer
I created a torch5-devel list, so if you want to discuss something about the development of torch, write here. Newly added features should also be advertised here. I will not send any mail personally anymore, so I strongly encourage you to subscribe to the list if you are interested in this kind of news.
https://lists.sourceforge.net/lists/listinfo/torch5-devel
---- end of disclaimer

I added the lab.apply() function to the lab package. Suppose you have a 1000x10000 random tensor and you want to apply a tanh to it. You can:

   x = lab.rand(1000,10000)

   -- superfast, takes 0.7s
   z = lab.tanh(x)

   -- superslow, takes 44.7s
   z = torch.Tensor():resizeAs(x)
   for i=1,x:size(1) do
      for j=1,x:size(2) do
         z[i][j] = math.tanh(x[i][j])
      end
   end

   -- superslow, takes 15.5s
   -- we can do it here, because we know the tensor is contiguous in memory
   z = torch.Tensor():resizeAs(x)
   local xs = x:storage()
   local zs = z:storage()
   for i=1,xs:size() do
      zs[i] = math.tanh(xs[i])
   end

   -- supercool, takes 1.4s
   z = lab.apply(x, math.tanh)

So this lab.apply() function is extremely efficient (considering it does Lua calls underneath) and should be used instead of loops whenever you can (well, except of course if an optimized C function already exists, as for tanh()).

Ronan.