From: Pietro Berkes <berkes@ga...>  20100731 13:29:56

Hi Thorton,

The problem is that one of the columns has a constant value. The covariance matrix ends up being singular, and the algorithm panics. I agree it should raise a more meaningful error; I'll fix that as soon as possible.

To diagnose this kind of problem, you can type:

numpy.cov(x.T)

As you will see, one column and one row are set to 0, showing that the covariance matrix is singular.

As a fix, you can remove the columns with constant values. Another possibility is to do a preliminary PCA in order to reduce the dimensionality by the number of dimensions with zero variance.

Best,
P.

On Sat, Jul 31, 2010 at 12:26 AM, Thorton Timms <mightythorton@...> wrote:
> The following example gives a "ValueError: scale <= 0". If I change one of
> the values in the second column to a 0 then the error goes away. However,
> my real data gives this error too. It is too big to post and I don't want
> to superficially modify it to make it work.
>
> Any ideas on how to get around the problem?
> [...]
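Pietro's diagnosis and fix can be sketched with plain numpy, using the matrix from Thorton's report (the second column is the constant one):

```python
import numpy as np

# The 10x5 binary matrix from the bug report; column 1 is all ones.
x = np.array([[1., 1., 0., 0., 0.],
              [0., 1., 1., 0., 0.],
              [0., 1., 0., 0., 0.],
              [0., 1., 1., 0., 0.],
              [0., 1., 0., 0., 1.],
              [0., 1., 0., 1., 0.],
              [0., 1., 0., 0., 0.],
              [1., 1., 0., 0., 0.],
              [1., 1., 0., 0., 0.],
              [0., 1., 0., 1., 1.]])

# Diagnose: the covariance matrix has an all-zero row and column for the
# constant dimension, so it is singular.
cov = np.cov(x.T)

# Fix: drop the zero-variance columns before training the FANode.
constant = np.isclose(x.var(axis=0), 0.0)
x_clean = x[:, ~constant]          # shape (10, 4)
```

After this, `fanode.train(x_clean)` operates on a covariance matrix of full rank.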
From: Thorton Timms <mightythorton@gm...>  20100731 04:26:13

The following example gives a "ValueError: scale <= 0". If I change one of the values in the second column to a 0 then the error goes away. However, my real data gives this error too. It is too big to post and I don't want to superficially modify it to make it work.

Any ideas on how to get around the problem?

Thanks,
Thorton

import mdp
from numpy import *

x = array([[ 1., 1., 0., 0., 0.],
           [ 0., 1., 1., 0., 0.],
           [ 0., 1., 0., 0., 0.],
           [ 0., 1., 1., 0., 0.],
           [ 0., 1., 0., 0., 1.],
           [ 0., 1., 0., 1., 0.],
           [ 0., 1., 0., 0., 0.],
           [ 1., 1., 0., 0., 0.],
           [ 1., 1., 0., 0., 0.],
           [ 0., 1., 0., 1., 1.]])

fanode = mdp.nodes.FANode(output_dim=3)
fanode.train(x)
fanode.stop_training()
From: Thorton Timms <mightythorton@gm...>  20100730 17:40:17

Great, thanks for the info. I read through the "Linear models" chapter.

One more quick clarification, please. If I want to identify which of the original factors contribute most to the new factors from the analysis, can I use the values in the E_y_mtx matrix? The values in that matrix that are furthest from 0 (positive or negative) would be the ones that contribute the most, so the values in a row that are relatively far from 0 would identify an original factor that contributes most to the new factor identified by the column containing that value. Is that correct? (Am I explaining myself clearly?)

Thanks again for all the help,
Thorton

On Thu, Jul 29, 2010 at 11:22 AM, Pietro Berkes <berkes@...> wrote:
> If you mean the coefficients of the factors, then you need to execute
> the node with your data. If you mean the factors themselves, then yes,
> it's the E_y_mtx array.

I meant the "factor_array" mentioned by Michael. I don't know what array that is and I couldn't find it in the code, but it was suggested that it could identify the factors. I'm not sure it is relevant based on your other comments.

> No, the sigma values correspond to the estimated noise in the input
> space on the various axes. To find something equivalent to the
> eigenvalues of PCA, you have to look at the length of the factors. See
> the chapter "Linear models" at
> http://www.ics.uci.edu/~welling/classnotes/classnotes.html
> [...]
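Thorton's reading of the loading matrix (entries furthest from zero mark the strongest contributors, regardless of sign) can be checked on a small example. The `loadings` array below is a hypothetical stand-in for FANode's E_y_mtx, assuming original variables as rows and factors as columns:

```python
import numpy as np

# Hypothetical 5x3 loading matrix: 5 original variables (rows),
# 3 extracted factors (columns).
loadings = np.array([[ 0.9,  0.1, 0.0],
                     [-0.2,  0.8, 0.1],
                     [ 0.1,  0.0, 0.7],
                     [ 0.0, -0.6, 0.2],
                     [ 0.1,  0.2, 0.6]])

# For each factor, the variable whose loading is furthest from zero
# contributes most; the sign only encodes direction.
top_variable = np.abs(loadings).argmax(axis=0)   # one index per factor
```

Here `top_variable` comes out as [0, 1, 2]: variable 0 dominates factor 0, and so on.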
From: Pietro Berkes <berkes@ga...>  20100729 18:28:12

On Thu, Jul 22, 2010 at 1:55 PM, Garima Garima <garima646@...> wrote:
> I'll try these. Thanks.
>
> Also I had a doubt about the face recognition approach you described: are
> the dimensions of the 2d array 'faces' (4096, 1000)?
> When I apply pca.train(faces) on this, the dimensions of the eigenfaces
> I'm getting are (100, 100) (eigenfaces.shape), which gives me wrong
> eigenfaces.

Not (1000, 1000)? It would be helpful if you could post your code.

> And when I try training with a face array of dimensions (1000, 4096), I
> get a 'memory error'.
> Can you please help me find out where I am going wrong?

I think this is the correct orientation of the arrays: the data set has got 1000 faces in it, right? How much memory do you have on your machine? You can try defining the PCANode with the argument dtype='float32' to use single precision. Otherwise, try removing other unnecessary data from memory.

Best,
Pietro

> [quoted earlier messages and list footers trimmed]
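Pietro's single-precision suggestion is easy to quantify with numpy alone. The array below matches the shape of Garima's (1000, 4096) face matrix; passing dtype='float32' to PCANode should trim the node's internal buffers the same way:

```python
import numpy as np

# A hypothetical face array the size of Garima's: 1000 images of 4096
# pixels. At double precision it needs 1000 * 4096 * 8 bytes (~31 MiB);
# casting to float32 halves the footprint.
faces64 = np.zeros((1000, 4096), dtype=np.float64)
faces32 = faces64.astype(np.float32)

halved = faces32.nbytes * 2 == faces64.nbytes   # True
```

For covariance-based PCA the memory pressure also comes from the 4096 x 4096 covariance matrix, so halving the element size can be the difference between fitting and a MemoryError.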
From: Pietro Berkes <berkes@ga...>  20100729 18:22:56

On Mon, Jul 26, 2010 at 6:45 PM, Thorton Timms <mightythorton@...> wrote:
> Hello,
> Thanks for the help. I reviewed Olivier's posts from here:
> http://sourceforge.net/mailarchive/forum.php?thread_name=AANLkTinWvuISIdqmfXgiiOoVmpotj6mcvwlSRpO1Qj9%40mail.gmail.com&forum_name=mdptoolkitusers
>
> Unfortunately, I'm still a little lost.
>
> 1) How do I get the "factor_array" from the FANode (is it
> FANode.E_y_mtx)? I've initiated the FANode, trained it with my data, and
> stopped the training. Then what?

If you mean the coefficients of the factors, then you need to execute the node with your data. If you mean the factors themselves, then yes, it's the E_y_mtx array.

> 2) From Olivier's post I can see how to make a good guess at the number
> of components by plotting the eigenvalues PCANode.d. Would evaluating the
> sigma values in the FANode be equivalent?

No, the sigma values correspond to the estimated noise in the input space on the various axes. To find something equivalent to the eigenvalues of PCA, you have to look at the length of the factors. See the chapter "Linear models" at http://www.ics.uci.edu/~welling/classnotes/classnotes.html .

> 3) With the PCA it still isn't clear to me how to figure out the
> components. Isn't there a matrix that gives a scalar value for each of
> the original factors which can be used to calculate the value for each of
> the components?

I think you mean the execute method.

Best,
Pietro

> [quoted earlier messages and list footers trimmed]
From: Pietro Berkes <berkes@ga...>  20100729 17:56:39

Hi Thorton!

On Wed, Jul 28, 2010 at 6:57 PM, Thorton Timms <mightythorton@...> wrote:
> Hello again,
>
> I've been experimenting with the FANode some more.
> Can the "output_dim" parameter be used to specify the number of factors?
>
> If I do a simple training like:
>
> x = array([[ 1., 1., 0., 0., 0.],
>            [ 0., 1., 1., 0., 0.],
>            [ 0., 0., 0., 0., 0.],
>            [ 0., 1., 1., 0., 0.],
>            [ 0., 1., 0., 0., 1.],
>            [ 0., 1., 0., 1., 0.],
>            [ 0., 1., 0., 0., 0.],
>            [ 1., 1., 0., 0., 0.],
>            [ 1., 1., 0., 0., 0.],
>            [ 0., 1., 0., 1., 1.]])
> fanode = mdp.nodes.FANode(output_dim=3)
> fanode.train(x)
> fanode.stop_training()
> result = fanode.execute(x)
>
> Will this process create an FA with 3 factors?

Yes, it will.

> Will the "result" object contain the new data with the factor values
> already calculated?

Precisely!

Best,
Pietro

> I know the mailing list is probably used to people with a fair bit more
> math background than I have. I'm attempting to do a summer project for
> high school. Any help would be appreciated.
>
> Thanks,
> Thorton
> [...]
From: Thorton Timms <mightythorton@gm...>  20100728 22:57:32

Hello again,

I've been experimenting with the FANode some more. Can the "output_dim" parameter be used to specify the number of factors?

If I do a simple training like:

x = array([[ 1., 1., 0., 0., 0.],
           [ 0., 1., 1., 0., 0.],
           [ 0., 0., 0., 0., 0.],
           [ 0., 1., 1., 0., 0.],
           [ 0., 1., 0., 0., 1.],
           [ 0., 1., 0., 1., 0.],
           [ 0., 1., 0., 0., 0.],
           [ 1., 1., 0., 0., 0.],
           [ 1., 1., 0., 0., 0.],
           [ 0., 1., 0., 1., 1.]])
fanode = mdp.nodes.FANode(output_dim=3)
fanode.train(x)
fanode.stop_training()
result = fanode.execute(x)

will this process create an FA with 3 factors? Will the "result" object contain the new data with the factor values already calculated?

I know the mailing list is probably used to people with a fair bit more math background than I have. I'm attempting to do a summer project for high school. Any help would be appreciated.

Thanks,
Thorton

On Mon, Jul 26, 2010 at 3:45 PM, Thorton Timms <mightythorton@...> wrote:
> [...]
From: Thorton Timms <mightythorton@gm...>  20100726 22:45:18

Hello,

Thanks for the help. I reviewed Olivier's posts from here:
http://sourceforge.net/mailarchive/forum.php?thread_name=AANLkTinWvuISIdqmfXgiiOoVmpotj6mcvwlSRpO1Qj9%40mail.gmail.com&forum_name=mdptoolkitusers

Unfortunately, I'm still a little lost.

1) How do I get the "factor_array" from the FANode (is it FANode.E_y_mtx)? I've initiated the FANode, trained it with my data, and stopped the training. Then what?

2) From Olivier's post I can see how to make a good guess at the number of components by plotting the eigenvalues PCANode.d. Would evaluating the sigma values in the FANode be equivalent?

3) With the PCA it still isn't clear to me how to figure out the components. Isn't there a matrix that gives a scalar value for each of the original factors which can be used to calculate the value for each of the components?

Thanks for the help,
Thorton

On Thu, Jul 22, 2010 at 12:04 AM, Michael Sarahan <mcsarahan@...> wrote:
> Check out Olivier's excellent guide to face recognition, posted to
> this list a few days ago.
> [...]
From: Garima Garima <garima646@ym...>  20100722 17:55:47

I'll try these. Thanks.

Also I had a doubt about the face recognition approach you described: are the dimensions of the 2d array 'faces' (4096, 1000)? When I apply pca.train(faces) on this, the dimensions of the eigenfaces I'm getting are (100, 100) (eigenfaces.shape), which gives me wrong eigenfaces. And when I try training with a face array of dimensions (1000, 4096), I get a 'memory error'. Can you please help me find out where I am going wrong?

________________________________
From: Olivier Grisel <olivier.grisel@...>
To: MDP users mailing list <mdptoolkitusers@...>
Sent: Tue, July 20, 2010 11:23:58 AM
Subject: Re: [mdptoolkitusers] Problem with recognition using PCANode.

[quoted message and list footers trimmed]
From: Michael Sarahan <mcsarahan@uc...>  20100722 07:35:59

Check out Olivier's excellent guide to face recognition, posted to this list a few days ago.

As for a reduced set of factors: you must decide what number of factors to keep. I use scree plots for this; others may have different suggestions. Scree plots are plots of the eigenvalues vs. their index. Where the eigenvalues start to decay linearly denotes the transition to factors that probably represent noise. Then it is simple to reduce the set of eigenvectors/factors by slicing the array. Since the factors are in columns of the array, that would be a line like this:

factor_array[:,:10]

to keep only the first 10 factors.

This is also discussed in Olivier's guide, but in the context of images. He reshapes the 1D array slice result from each factor into the original image size to make a 2D image.

On 21 July 2010 23:20, Thorton Timms <mightythorton@...> wrote:
> I know this may seem a bit basic, but how are the results from the PCA
> and FA interpreted?
> How do I turn the returned array into a reduced set of factors?
>
> Thanks,
> Thorton
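Michael's scree-plot recipe can be sketched end to end with numpy on synthetic data. All names below are illustrative; with MDP you would read the sorted eigenvalues directly from PCANode.d instead of computing them by hand:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 samples in 20 dimensions with 3 strong latent
# directions plus isotropic noise; the kind of structure a scree plot
# separates cleanly.
signal = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 20)) * 3.0
data = signal + rng.normal(size=(200, 20))

# Eigendecomposition of the covariance matrix, sorted descending.
eigvals, eigvecs = np.linalg.eigh(np.cov(data.T))
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

# A scree plot is just pl.plot(eigvals); the elbow where the values
# flatten into the noise floor suggests the cut. Here the first 3
# factors dominate, and keeping them is one slice over the columns:
factor_array = eigvecs            # factors are columns, as in the post
reduced = factor_array[:, :3]     # keep only the first 3 factors
```

The same `[:, :k]` slice applies to any factor matrix whose factors sit in columns, which is the convention Michael describes.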
From: Thorton Timms <mightythorton@gm...>  20100721 22:21:00

I know this may seem a bit basic, but how are the results from the PCA and FA interpreted? How do I turn the returned array into a reduced set of factors?

Thanks,
Thorton
From: Olivier Grisel <olivier.grisel@en...>  20100720 15:24:25

2010/7/20 Garima Garima <garima646@...>:
> Also, I am trying to do scene recognition specifically, so I am trying to
> recognize a scene to tell whether a place is new or already visited. For
> this I am trying to apply the same process as for face recognition, but
> if you can help me with anything which is required for this, that will be
> helpful too.

I think this is still an open problem. I don't think there is any turnkey algorithm that will work out of the box for this. You can try to do some experiments with image descriptors such as GIST ( http://pypi.python.org/pypi/pyleargist ) or SIFT or SURF (available in opencv, for instance).

--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
From: Garima Garima <garima646@ym...>  20100720 15:18:38

Also, I am trying to do scene recognition specifically, so I am trying to recognize a scene to tell whether a place is new or already visited. For this I am trying to apply the same process as for face recognition, but if you can help me with anything which is required for this, that will be helpful too.

Thanks.

________________________________
From: Garima Garima <garima646@...>
To: MDP users mailing list <mdptoolkitusers@...>
Sent: Sun, July 18, 2010 5:20:06 PM
Subject: Re: [mdptoolkitusers] Problem with recognition using PCANode.

Thanks a lot, Olivier. This is really helpful. I could find the face space using the pca node. But I am still not very clear about the recognition part: do I have to calculate the weights associated with the training images and test image first, or is that calculated by the node? Can you please help me a little bit more with how to run the kNN queries. Thanks once again.

On Tue, Jul 13, 2010 at 6:38 PM, Olivier Grisel <olivier.grisel@...> wrote:

2010/7/13 Garima Garima <garima646@...>:
> Hi everyone,
>
> I'm trying to apply the PCANode of mdp on a database of images for
> feature extraction and recognition. And I'm not getting what I should do
> for recognition after training the data using "pcanode.train"; training
> with this method gives me eigenvectors and eigenvalues, but I was
> wondering, is there any method in PCANode for recognition?
>
> Can anyone please help me with this?

Suppose you have a collection of n_faces = 1000 photos of w=64 x h=64 = 4096 pixel pictures of gray-level faces. You can find how to do that by having a look at my github account, here: http://github.com/ogrisel/codemaker/tree/master/examples/faces/ (work in progress). Put them in a 2d numpy array named "faces".

To compute the eigenfaces you need to take a PCA of the faces data. Let us assume we are interested in the top 100 components (the most significant eigenfaces in terms of explained variance):

>>> import numpy as np
>>> import mdp
>>> pca = mdp.PCANode(output_dim=100)
>>> pca.train(faces)
>>> pca.stop_training()

Then consider the eigenvalues in pca.d, to get an idea of their relative strength; check using pylab:

>>> import pylab as pl
>>> pl.plot(pca.d)
>>> pl.show()
>>> pca.get_explained_variance()
0.91742051

The eigenvectors are then the eigenfaces of your recognition model:

>>> eigenfaces = pca.v.T
>>> eigenfaces.shape
(100, 4096)

You can view how they look with, e.g. for the top 10:

>>> for i in range(10):
...     pl.subplot(1, 10, i + 1)
...     pl.imshow(eigenfaces[i].reshape((64, 64)))

Then you can encode all your face database in the eigenfaces space (a projection on the eigenvectors):

>>> encoded = pca.execute(faces)

And run simple kNN queries using the euclidean distance in the eigenfaces space. Suppose x is an unseen face (I will take the first element of the database in this example):

>>> x = faces[0]
>>> code_x = pca.execute(x.reshape((1, 4096)))[0]

The top 5 nearest neighbors are:

>>> np.argsort(np.sum((code_x - encoded) ** 2, axis=1))[:5]
array([ 0, 480, 358, 729, 780])

If you have the labels (the names of the person in the photo) you can make those indices vote and achieve recognition that way. You could also train a multi-class classifier such as an SVM with the label data to achieve better predictive accuracy than just the kNN query.

--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel

[list footers trimmed]
From: Garima Garima <garima646@ym...> - 2010-07-18 21:33:56

Thanks a lot, Olivier. This is really helpful. I was able to find the face space using the PCA node, but I am still not very clear about the recognition part: do I have to calculate the weights associated with the training images and the test image first, or are they calculated by the node? Can you please help me a little bit more with how to run the kNN queries? Thanks once again.

On Tue, Jul 13, 2010 at 6:38 PM, Olivier Grisel <olivier.grisel@...> wrote:

2010/7/13 Garima Garima <garima646@...>:
> Hi everyone,
>
> I'm trying to apply PCANode of mdp on a database of images for feature
> extraction and recognition. [...]

Suppose you have a collection of n_faces = 1000 photos of w=64 x h=64 = 4096 pixel pictures of gray-level faces. [...]

--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
From: Pietro Berkes <berkes@br...> - 2010-07-15 16:16:41

Dear MDP users,

next week we'll have a programming sprint in Berlin to prepare the next release of MDP, which will include some important new features and algorithms:
https://sourceforge.net/apps/mediawiki/mdp-toolkit/index.php?title=MDP_Sprint_2010

It would be really useful to get some feedback from you, so that we can better set our current and future priorities. We would greatly appreciate it if you could send us a short message, either on this mailing list or privately. This is also an excellent time to send us algorithms that you have written privately and that you would like to see included in MDP!

Useful feedback includes:
- what kind of tasks do you use MDP for?
- do you typically use only single algorithms (nodes) or entire flows?
- is your main concern speed or memory? (or neither)
- which algorithms would you like to see in MDP?
- which other scientific libraries do you use (besides numpy and scipy)?
- how do you find the documentation and tutorial?

If you have found MDP useful, please consider dropping us a line; it will help us make it better!

All the best,
the MDP developers
From: Olivier Grisel <olivier.grisel@en...> - 2010-07-15 14:49:18

2010/7/15 Pietro Berkes <berkes@...>:
> Hi Olivier!
> Definitely blog it first! If you don't mind, though, we might
> include it in the tutorial examples for the next version of MDP. Are
> you using a publicly available face database, so that we can generate
> a few pictures?

Yes, this is the database known as Labeled Faces in the Wild:
http://vis-www.cs.umass.edu/lfw/

The preprocessing script I used relies on OpenCV and is available here:
http://github.com/ogrisel/codemaker/tree/master/examples/faces/

However, I am not sure I will find the time to blog soon, so if you want to work on this example for MDP, please feel free to do so without waiting for me :)

--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
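The linked script does the face detection and cropping with OpenCV; the last step, stacking gray-level crops into the 2-d "faces" array that PCANode expects, can be sketched with numpy alone. Here random arrays stand in for the actual 64x64 crops:

```python
import numpy as np

rng = np.random.RandomState(42)

# Stand-ins for 64x64 gray-level face crops; in practice these would
# come from the OpenCV-based preprocessing script linked above.
crops = [rng.rand(64, 64) for _ in range(10)]

# PCANode wants one flattened sample per row: shape (n_faces, 4096).
faces = np.vstack([c.reshape(1, -1) for c in crops])

# Optional: remove the per-image mean gray level (PCA itself only
# centers each feature across samples, not per-image illumination).
faces -= faces.mean(axis=1, keepdims=True)
```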
From: Pietro Berkes <berkes@ga...> - 2010-07-15 14:07:37

Hi Olivier!

Definitely blog it first! If you don't mind, though, we might include it in the tutorial examples for the next version of MDP. Are you using a publicly available face database, so that we can generate a few pictures?

P.

On Wed, Jul 14, 2010 at 6:28 AM, Olivier Grisel <olivier.grisel@...> wrote:
> 2010/7/14 Pietro Berkes <berkes@...>:
>> Thank you Olivier, nice tutorial! Maybe we should turn this email into
>> one of the MDP examples? It's a classic! Note that MDP now has a kNN
>> node, together with several other classifiers.
>
> I was thinking of blogging on this first, but if you want to turn this
> mail into an MDP example please feel free to go ahead.
>
> --
> Olivier
> http://twitter.com/ogrisel - http://github.com/ogrisel
From: Olivier Grisel <olivier.grisel@en...> - 2010-07-14 10:28:37

2010/7/14 Pietro Berkes <berkes@...>:
> Thank you Olivier, nice tutorial! Maybe we should turn this email into
> one of the MDP examples? It's a classic! Note that MDP now has a kNN
> node, together with several other classifiers.

I was thinking of blogging on this first, but if you want to turn this mail into an MDP example please feel free to go ahead.

--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
From: Pietro Berkes <berkes@ga...> - 2010-07-14 07:03:02

Thank you Olivier, nice tutorial! Maybe we should turn this email into one of the MDP examples? It's a classic! Note that MDP now has a kNN node, together with several other classifiers.

P.

On Tue, Jul 13, 2010 at 6:38 PM, Olivier Grisel <olivier.grisel@...> wrote:
> 2010/7/13 Garima Garima <garima646@...>:
>> Hi everyone,
>>
>> I'm trying to apply PCANode of mdp on a database of images for feature
>> extraction and recognition. [...]
>
> Suppose you have a collection of n_faces = 1000 photos of w=64 x h=64 =
> 4096 pixel pictures of gray-level faces. [...]
>
> --
> Olivier
> http://twitter.com/ogrisel - http://github.com/ogrisel
From: Olivier Grisel <olivier.grisel@en...> - 2010-07-13 22:39:10

2010/7/13 Garima Garima <garima646@...>:
> Hi everyone,
>
> I'm trying to apply PCANode of mdp on a database of images for feature
> extraction and recognition. And I'm not sure what I should do for
> recognition after training the data using "pcanode.train": training with
> this method gives me eigenvectors and eigenvalues, but I was wondering,
> is there any method in PCANode for recognition?
>
> Can anyone please help me with this?

Suppose you have a collection of n_faces = 1000 photos of w=64 x h=64 = 4096 pixel pictures of gray-level faces. You can find out how to do that by having a look at my github account, here:
http://github.com/ogrisel/codemaker/tree/master/examples/faces/ (work in progress).

Put them in a 2d numpy array named "faces".

To compute the eigenfaces you need to take a PCA of the faces data. Let us assume we are interested in the top 100 components (the most significant eigenfaces in terms of explained variance):

>>> import numpy as np
>>> import mdp

>>> pca = mdp.PCANode(output_dim=100)
>>> pca.train(faces)
>>> pca.stop_training()

Then consider the eigenvalues in pca.d to get an idea of their relative strength; check using pylab:

>>> import pylab as pl
>>> pl.plot(pca.d)
>>> pl.show()
>>> pca.get_explained_variance()
0.91742051

The eigenvectors are then the eigenfaces of your recognition model:

>>> eigenfaces = pca.v.T
>>> eigenfaces.shape
(100, 4096)

You can see what they look like with, e.g. for the top 10:

>>> for i in range(10):
...     pl.subplot(1, 10, i + 1)
...     pl.imshow(eigenfaces[i].reshape((64, 64)))

Then you can encode your whole face database in the eigenfaces space (a projection on the eigenvectors):

>>> encoded = pca.execute(faces)

And run simple kNN queries using the euclidean distance in the eigenfaces space. Suppose x is an unseen face (I will take the first element of the database in this example):

>>> x = faces[0]
>>> code_x = pca.execute(x.reshape((1, 4096)))[0]

The top 5 nearest neighbors are:

>>> np.argsort(np.sum((code_x - encoded) ** 2, axis=1))[:5]
array([ 0, 480, 358, 729, 780])

If you have the labels (the names of the person in each photo) you can make those indices vote and achieve recognition that way. You could also train a multi-class classifier such as an SVM on the label data to achieve better predictive accuracy than just the kNN query.

--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
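The voting step mentioned at the end of the tutorial can be sketched as follows; the random encoded array, the 40 hypothetical person names, and the choice of k=5 are all made up for illustration:

```python
import numpy as np
from collections import Counter

rng = np.random.RandomState(0)

# Stand-ins: 1000 faces already projected into a 100-dim eigenface
# space, each tagged with one of 40 hypothetical person names.
encoded = rng.rand(1000, 100)
labels = np.array(['person_%d' % (i % 40) for i in range(1000)])

# Query: the encoding of a face (here one already in the database).
code_x = encoded[0]

# Indices of the 5 nearest neighbors by squared euclidean distance.
top5 = np.argsort(np.sum((code_x - encoded) ** 2, axis=1))[:5]

# Majority vote over the neighbors' labels gives the predicted name.
prediction = Counter(labels[top5]).most_common(1)[0][0]
```

With real data one would exclude the query itself from the neighbor list and tune k on a held-out set.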
From: Garima Garima <garima646@ym...> - 2010-07-13 15:41:41

Hi everyone,

I'm trying to apply PCANode of mdp on a database of images for feature extraction and recognition. I'm not sure what I should do for recognition after training the data using "pcanode.train": training with this method gives me eigenvectors and eigenvalues, but I was wondering, is there any method in PCANode for recognition?

Can anyone please help me with this? Thanks in advance.
From: Tiziano Zito <tiziano.zito@bc...> - 2010-07-09 12:45:52

Advanced Scientific Programming in Python
=========================================
an Autumn School by the G-Node, the Center for Mind/Brain Sciences and the Fondazione Bruno Kessler

Scientists spend more and more time writing, maintaining, and debugging software. While techniques for doing this efficiently have evolved, only a few scientists actually use them. As a result, instead of doing their research, they spend far too much time writing deficient code and reinventing the wheel. In this course we will present a selection of advanced programming techniques with theoretical lectures and practical exercises tailored to the needs of a programming scientist. New skills will be tested in a real programming project: we will team up to develop an entertaining scientific computer game.

We'll use the Python programming language for the entire course. Python works as a simple programming language for beginners, but more importantly, it also works great in scientific simulations and data analysis. Clean language design and easy extensibility are driving Python to become a standard tool for scientific computing. Some of the most useful open source libraries for scientific computing and visualization will be presented.

This school is targeted at postdocs and PhD students from all areas of science. Competence in Python or in another language such as Java, C/C++, MATLAB, or Mathematica is absolutely required. A basic knowledge of the Python language is assumed. Participants without prior experience with Python should work through the proposed introductory materials.

Date and Location
=================
October 4-8, 2010. Trento, Italy.
Preliminary Program
===================

Day 0 (Mon Oct 4) - Software Carpentry & Advanced Python
• Documenting code and using version control
• Object-oriented programming, design patterns, and agile programming
• Exception handling, lambdas, decorators, context managers, metaclasses

Day 1 (Tue Oct 5) - Software Carpentry
• Test-driven development, unit testing & quality assurance
• Debugging, profiling and benchmarking techniques
• Data serialization: from pickle to databases

Day 2 (Wed Oct 6) - Scientific Tools for Python
• Advanced NumPy
• The Quest for Speed (intro): interfacing to C
• Programming project

Day 3 (Thu Oct 7) - The Quest for Speed
• Writing parallel applications in Python
• When parallelization does not help: the starving CPUs problem
• Programming project

Day 4 (Fri Oct 8) - Practical Software Development
• Efficient programming in teams
• Programming project
• The Pac-Man Tournament

Every evening we will have the tutors' consultation hour: tutors will answer your questions and give suggestions for your own projects.

Applications
============
You can apply online at http://www.g-node.org/python-autumn-school

Applications must be submitted before August 31st, 2010. Notifications of acceptance will be sent by September 4th, 2010.

No fee is charged, but participants should take care of travel, living, and accommodation expenses. Candidates will be selected on the basis of their profile. Places are limited: the acceptance rate in past editions was around 30%.

Prerequisites
=============
You are supposed to know the basics of Python to participate in the lectures! Look on the website for a list of introductory material.
Faculty
=======
• Francesc Alted, author of PyTables, Castelló de la Plana, Spain
• Pietro Berkes, Volen Center for Complex Systems, Brandeis University, USA
• Valentin Haenel, Berlin Institute of Technology and Bernstein Center for Computational Neuroscience Berlin, Germany
• Zbigniew Jędrzejewski-Szmek, Faculty of Physics, University of Warsaw, Poland
• Eilif Muller, The Blue Brain Project, Ecole Polytechnique Fédérale de Lausanne, Switzerland
• Emanuele Olivetti, NeuroInformatics Laboratory, Fondazione Bruno Kessler and University of Trento, Italy
• Rike-Benjamin Schuppner, Bernstein Center for Computational Neuroscience Berlin, Germany
• Bartosz Teleńczuk, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Germany
• Bastian Venthur, Berlin Institute of Technology and Bernstein Focus: Neurotechnology, Germany
• Stéfan van der Walt, Applied Mathematics, University of Stellenbosch, South Africa
• Tiziano Zito, Berlin Institute of Technology and Bernstein Center for Computational Neuroscience Berlin, Germany

Organized by Paolo Avesani for the Center for Mind/Brain Sciences and the Fondazione Bruno Kessler, and by Zbigniew Jędrzejewski-Szmek and Tiziano Zito for the German Neuroinformatics Node of the INCF.

Website: http://www.g-node.org/python-autumn-school
Contact: python-info@...