Learning Shift Invariant Features

2009-09-27
2012-11-08
  • Nick Segievskiy

    Nick Segievskiy - 2009-09-27

    Hello.
    I have read some papers about autoassociative neural nets built on CNNs:
    "Unsupervised Learning of Invariant Feature Hierarchies with Applications
    to Object Recognition"
    "Efficient Learning of Sparse Representations with an Energy-Based Model"
    and I think they are great.

    How can I create this CNN topology in Lush? And how can I organize the
    central 2D layer (the invariant feature layer)?

    Thanks in advance.

     
  • koray kavukcuoglu

    Hi,

    I am including some information and code about recent work we have
    done, based on the two papers you mentioned. This code does not
    include the scripts for building a multi-layer network, only for
    training one layer unsupervised, but all the libraries needed to do
    so are included. I am working on releasing a more complete version.

    You can download the PSD code from the link below. After you install
    Lush on a Linux box, it should run without any problems. The
    algorithm is explained in more detail in this tech report:
    http://cs.nyu.edu/~koray/publis/koray-psd-08.pdf

    http://cs.nyu.edu/~koray/files/PSD.tar.gz

    All the inputs are defined in a file named psd/code/input.
    You can change the parameters of the system in that file.
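    For readers who want the idea behind the code without the tech report at
    hand: PSD trains a decoder (dictionary) and a feed-forward encoder jointly,
    so that at test time the sparse code is predicted in one pass instead of by
    iterative optimization. Below is a minimal NumPy sketch of the objective;
    all variable names, sizes, and coefficients are purely illustrative, not
    the identifiers or settings used in the Lush code.

```python
import numpy as np

# Illustrative sketch of a PSD-style objective:
#   E(x, z) = ||x - Wd z||^2 + lam * ||z||_1 + alpha * ||z - g(x)||^2
# where g(x) = D * tanh(We x) is the trainable encoder that predicts
# the sparse code z in a single feed-forward pass.

rng = np.random.default_rng(0)
n_in, n_code = 144, 256                # e.g. 12x12 patches, 256 code units

Wd = rng.standard_normal((n_in, n_code)) * 0.1   # decoder (dictionary)
We = rng.standard_normal((n_code, n_in)) * 0.1   # encoder filters
D = np.ones(n_code)                               # per-unit encoder gains

def encoder(x):
    """Feed-forward prediction of the sparse code."""
    return D * np.tanh(We @ x)

def psd_energy(x, z, lam=1.0, alpha=1.0):
    recon = np.sum((x - Wd @ z) ** 2)              # reconstruction error
    sparse = lam * np.sum(np.abs(z))               # L1 sparsity penalty
    pred = alpha * np.sum((z - encoder(x)) ** 2)   # code-prediction error
    return recon + sparse + pred

x = rng.standard_normal(n_in)
z = encoder(x)          # at z = g(x) the prediction term vanishes
print(psd_energy(x, z) >= 0.0)
```

    During training, the energy is minimized over z for each sample and the
    three sets of parameters (Wd, We, D) are updated so the encoder's one-pass
    prediction lands close to the optimal sparse code.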

    The archive is 78MB because it also includes the training data
    (images only). Extract it and follow these steps (assuming you
    already have Lush installed):

    > tar xvzf PSD.tar.gz
    > cd PSD/psd/code
    > lush

    …..

    ? ^L run
    This will take some time to finish (10-20 min)
    …….
    ……
    ……

    ? ^L util


    ? (show-machine 2 8 8 "exper1") ;; left: encoder, right: decoder;
                                    ;; encoder may look noisy due to early stopping
    ? (generate-log-pics "exper1")
    ? Ctrl-D

    > display exper1/curves-exper1.png

    These steps will train the system on grayscale image patches, show
    you what it learned, and give some insight into how the energy is
    lowered during training.

    Thanks,
    koray

     
  • Nick Segievskiy

    Nick Segievskiy - 2009-10-12

    Thank you, koraykv.
    That's exactly what I need. Now I will try my own experiments (faces,
    office furniture) and study your nice system (the organization of the
    sparse codes, the optimization of the feature maps). =)

     
  • sergieta

    sergieta - 2010-11-09

    Hello,
    How can I use the PSD (IPSD) code?
    1) How can I visualize a reconstructed test image (through the PSD filters)?
    2) How do I create feature vectors from test data?

    I tried to calculate the feature vectors and reconstruct the data:
    (libload "libimage/image-io")
    (libload "libimage/ubimage")

    (addpath "eblearn/code")
    (libload "ebm")
    (libload "data-source-berkeley.lsh")
    (libload "psd-logger.lsh")
    (libload "psd-trainer.lsh")
    (libload "util.lsh")

    ;; load my data
    (setq fdata_name "data-")
    (setq img_dir "/media/Data2/ImageDB/letters/test/28x28seq2/")

    (setq dir (files img_dir))
    (setq img (ubyte-matrix (- (length dir) 2) 28 28))
    (setq lbl (ubyte-matrix (- (length dir) 2)))
    (let* ((k 0))
      (for (i 0 (- (length dir) 1))
        (setq fname (nth i dir))
        (when (= ".png" (right fname 4))
          (copy-matrix (image-read-ubim (concat img_dir fname))
                       (select img 0 k))
          (lbl k 0)
          (incr k))))

    (save-matrix img (concat fdata_name "img.mat"))

    ;; init & load psd vars
    (reading "machine1.obj" (setq machine (bread)))

    (defparameter testdata img)
    (defparameter data
      (new dsource-berkeley-hv-lm testdata 0.0167 20 12 12))
    (defparameter input (new state-idx3 1 1 1))
    (defparameter output (new state-idx3 1 1 1))
    (setq energy :machine:enc-energy)
    (==> data seek 0)

    ;; run psd on test data
    (setq niter 10)
    (for (iiter 0 (- niter 1))
      ((-int-) iiter)
      (==> data fprop input output)
      (==> machine fprop input output energy)
      (==> data next)
      ^P:machine:fgc:dec-out:x
      ^P:machine:fgc:enc-out:x
      ;;^P:machine:enc-out:x
      )

    but I receive an error that I cannot understand:
    *** lisp_c runtime error: specified subscript is too large
    ** in: C_fprop_C_back_convolution_module
    ** in: C_fprop_C_fg_codec

    *** ==> : Run-time error in compiled code

     
  • sergieta

    sergieta - 2010-11-09

    I solved my problem =)

    ;; run psd on test data
    (setq niter 10)
    (for (k 0 (- niter 1))
      (copy-matrix (select img 0 k) :input:x)
      (==> :machine:encoder fprop input code)
      (==> :machine:decoder fprop code output)

      (setq img_out (select :output:x 0 0))
      (new-window 28 28)
      (gray-draw-matrix 0 0 img_out 1 0 1 1)
      (save-window-as-ppm (concat "out/" (str k) "-out.ppm"))

      (setq img_out (select :input:x 0 0))
      (new-window 28 28)
      (gray-draw-matrix 0 0 img_out 1 0 1 1)
      (save-window-as-ppm (concat "out/" (str k) "-in.ppm")))

    The reconstructed images look good; now I will try to train a neural
    net on the PSD codes.
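    For anyone following this thread outside Lush: the test-time
    feature-extraction step above (one encoder fprop per image, with the
    resulting code used as the feature vector) can be sketched as below.
    Everything here is illustrative NumPy with random weights, not the trained
    machine's actual parameters or identifiers.

```python
import numpy as np

# Sketch of PSD test-time feature extraction: a single feed-forward
# encoder pass per image, no iterative sparse-coding optimization.

rng = np.random.default_rng(1)
n_in, n_code = 784, 256            # 28x28 images, 256 code units

We = rng.standard_normal((n_code, n_in)) * 0.05   # encoder filters
D = np.ones(n_code)                                # per-unit gains

def encode(x):
    """Predicted code g(x) = D * tanh(We x); used as the feature vector."""
    return D * np.tanh(We @ x.ravel())

batch = rng.standard_normal((10, 28, 28))          # ten fake test images
features = np.stack([encode(img) for img in batch])
print(features.shape)                              # (10, 256)
```

    The stacked code vectors can then be fed directly to a classifier, which
    is the "train a NN on the PSD codes" step mentioned above.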

     
  • Yann LeCun

    Yann LeCun - 2010-11-11

    Good for you!

    -- Yann

     
  • sergieta

    sergieta - 2010-11-19

    Hello
    I did some experiments with PSD and IPSD, but they failed. I tried to
    learn MNIST data: 28x28 -> PSD -> 256x17x17 -> subsample -> 6400. My
    data now has more dimensions, and I hoped for more linearity and
    sparsity… Then I trained an RBF NN on this data and got no good
    result =(
    What do I need to do with PSD? How can I make the PSD codes sparser?

     
