From: Jahng GeonHo <ghjahng@ma...>  20031121 17:13:47
Attachments:
Message as HTML
GeonHo Jahng, Ph.D..vcf

From: tulu <tulu@ms...>  20031114 04:30:30
Attachments:
Message as HTML

Dear MarsBaR experts,

I have a simple question about extracting ROI activity in a group of subjects. After selecting the ROI and the subjects' images, the scaling form has 3 options: session specific scaling, proportional scaling, and raw. What is the difference between session specific scaling and proportional scaling?

Thanks for replying,

Jung Lung Hsu, M.D.
Department of Neurology, Shin Kong WHS Memorial Hospital, Taipei, Taiwan
mail: tulu@... Fax: 886228344906 
From: Matthew Brett <matthewb@uc...>  20031113 02:06:05

Hi,

> 1. Scaling from
>    *Session specific scaling
>    Proportional scaling
>    Raw data
> 2. Scale grand mean to (0=raw)
>    100
> 3. Summary function
>    *mean
>    Weighted mean
>    Median
>    1st eigenvector

These choices will depend on what you want from the data. I would imagine that you want data that is pretty close to the data that was used for the original model in SPM that gave you the activation. In that case you would choose: Session specific scaling, unless you used the "Global scale" option in SPM, in which case choose global scaling. Scale the grand mean to 100 (this is hardcoded in the SPM fMRI interface). The summary function is a matter of taste; I think the mean is the most accessible as a measure.

Best,

Matthew 
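[Editor's note] To make the scaling option concrete: "scale grand mean to 100" just rescales each session so its overall mean signal is 100, so values then read as percent of the session mean. A minimal numpy sketch — the toy data and the `session_scale` helper are illustrative, not MarsBaR's actual code:

```python
import numpy as np

# Toy ROI data: two sessions of raw signal, 5 scans x 3 voxels each,
# with deliberately different baselines (illustrative numbers only).
rng = np.random.default_rng(0)
sessions = [rng.normal(900, 50, size=(5, 3)),
            rng.normal(1100, 50, size=(5, 3))]

def session_scale(data, target=100.0):
    """Rescale a session so its grand mean (mean over all scans and
    voxels) equals `target`; values then read as percent of session mean."""
    return data * (target / data.mean())

scaled = [session_scale(s) for s in sessions]
for s in scaled:
    # Each session now has grand mean 100, removing between-session
    # baseline differences -- the point of session specific scaling.
    assert np.isclose(s.mean(), 100.0)
```

Proportional scaling differs in that each scan, rather than each whole session, is rescaled to the target.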
From: Li, Yu <julli@iu...>  20031112 21:46:45

Dear SPMers,

We have an fMRI study (boxcar). After doing a group (patient/control) analysis with SPM99, we want to get the time series for each ROI that shows activation in the group activation map. I am new to MarsBaR, and have a few simple questions about how to choose among the following options in Extract data (fMRI):

1. Scaling from
   *Session specific scaling
   Proportional scaling
   Raw data
2. Scale grand mean to (0=raw)
   100
3. Summary function
   *mean
   Weighted mean
   Median
   1st eigenvector

Any advice is appreciated. Thanks,

Julia Li 
From: matthewb <matthewb@uc...>  20031112 00:00:09

Hi,

> When Scott talked about using the beta weights derived from "parameter
> estimates", "effects of interest" for each condition across subjects, I
> understood him to mean that, after one has obtained the results from the
> contrast one is interested in, one would choose "graph" then "contrast
> of parameter estimates" then "effects of interest", which would produce
> a bar graph with standard error bars (in the Graphics window) and a
> cbeta matrix (in MatLab). The cbeta matrix would contain the values of
> each effect in the graph (but not the values for the standard error
> bars). The second level analysis is then done by copying the values from
> the cbeta matrix, for each subject, into another program like SPSS and
> seeing if there was a significant difference across subjects for these
> beta values. My question is: if the standard error isn't somehow
> included in the SPSS analysis, aren't we likely to get misleading
> results?

I think the first thing to say is that MarsBaR uses exactly the same approach as SPM for second level analyses. Taking out the values as you describe above, and putting them into a second level analysis, without the errors, is exactly what SPM does. The con_????.imgs of SPM only contain the cbeta values for each voxel, and do not contain any estimate of the error. The point that Will Penny was trying to make in the email that I pointed you to was that, if the designs are the same for every subject, then doing it this way (the SPM way, not explicitly using the first level error variance at the second level) gives the same answer as doing it with the error variances.

I hope that's helpful,

Matthew 
From: Kristina DoingHarris <kristina.doingharris@cs...>  20031111 18:17:37

Matthew,

It is entirely possible that I am not making myself clear or that I have gotten the wrong end of the stick. When Scott talked about using the beta weights derived from "parameter estimates", "effects of interest" for each condition across subjects, I understood him to mean that, after one has obtained the results from the contrast one is interested in, one would choose "graph" then "contrast of parameter estimates" then "effects of interest", which would produce a bar graph with standard error bars (in the Graphics window) and a cbeta matrix (in MatLab). The cbeta matrix would contain the values of each effect in the graph (but not the values for the standard error bars). The second level analysis is then done by copying the values from the cbeta matrix, for each subject, into another program like SPSS and seeing if there was a significant difference across subjects for these beta values. My question is: if the standard error isn't somehow included in the SPSS analysis, aren't we likely to get misleading results?

Cheers,
Kris

_
Kristina DoingHarris, Ph.D.
Department of Psychology, University of Utah
380 S. 1530 E., Rm 502, Salt Lake City, UT 84112
ph. 801.518.8636, fax 801.581.5841
kristina.doingharris@...

Original Message
From: Matthew Brett [mailto:matthewb@...]
Sent: Monday, November 10, 2003 12:53 PM
To: Kristina DoingHarris
Cc: marsbar
Subject: RE: [Marsbarusers] group analysis with ROI

On Mon, 2003-11-10 at 10:53, Kristina DoingHarris wrote:
> Matthew,
>
> I understand that the variance is in the con. Files, but how does it
> come through in the beta weights that Scott talked about:

Hi,

I am not sure I quite get your question here... but I'll try and answer. The contrast values are just sums of the beta weights. So, if your contrast was [1 0] for a design with a covariate and mean column, then the contrast value will be the same as the beta value for the first column. In SPM, you can see this by applying a [1 0 0 ..] contrast to some design, and comparing the contrast image generated to the beta_001.img; they will be identical. Maybe that's not what you meant?

Best,

Matthew 
From: Matthew Brett <matthewb@uc...>  20031110 19:53:36

On Mon, 2003-11-10 at 10:53, Kristina DoingHarris wrote:
> Matthew,
>
> I understand that the variance is in the con. Files, but how does it
> come through in the beta weights that Scott talked about:

Hi,

I am not sure I quite get your question here... but I'll try and answer. The contrast values are just sums of the beta weights. So, if your contrast was [1 0] for a design with a covariate and mean column, then the contrast value will be the same as the beta value for the first column. In SPM, you can see this by applying a [1 0 0 ..] contrast to some design, and comparing the contrast image generated to the beta_001.img; they will be identical. Maybe that's not what you meant?

Best,

Matthew 
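[Editor's note] Matthew's point that a contrast value is just a weighted sum of the betas can be checked numerically. A minimal numpy sketch — the toy design with one covariate plus a mean column is illustrative, and nothing here uses SPM itself:

```python
import numpy as np

# Toy design: one covariate column plus a mean (constant) column.
rng = np.random.default_rng(1)
n = 40
covariate = rng.normal(size=n)
X = np.column_stack([covariate, np.ones(n)])

# Simulated data with true slope 2 and mean 5 (illustrative values).
y = 2.0 * covariate + 5.0 + rng.normal(scale=0.5, size=n)

# Ordinary least squares gives the "beta weights".
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# A contrast value is a weighted sum of the betas: c' * beta.
c = np.array([1.0, 0.0])
contrast_value = c @ beta

# For c = [1 0] the contrast value IS the first beta -- which is why the
# contrast image and beta_001.img are identical for such a contrast.
assert np.isclose(contrast_value, beta[0])
```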
From: Kristina DoingHarris <kristina.doingharris@cs...>  20031110 18:53:59

Matthew,

I understand that the variance is in the con. Files, but how does it come through in the beta weights that Scott talked about:

>> I have not been able to find information on how to do a group ROI
>> analysis. I have functionally defined ROIs of the left hippocampus in
>> multiple subjects, and it is not clear to me how to proceed. I don't
>> see anything like con.img files output, which is what one would use in a
>> regular group analysis. The study is block design with 5 conditions, 516
>> reps (TRs), 1 session. I was considering comparing the beta weights
>> derived from the "parameter estimates", "effects of real interest" for
>> each condition across subjects, but I am not sure if that is
>> appropriate. Any advice or suggestions would be greatly appreciated. Thanks,
>> Scott Hayes

Cheers,
Kris

_
Kristina DoingHarris, Ph.D.
Department of Psychology, University of Utah
380 S. 1530 E., Rm 502, Salt Lake City, UT 84112
ph. 801.518.8636, fax 801.581.5841
kristina.doingharris@...

Original Message
From: Matthew Brett [mailto:matthewb@...]
Sent: Monday, November 10, 2003 11:04 AM
To: Kristina DoingHarris
Cc: marsbar
Subject: RE: [Marsbarusers] group analysis with ROI

Hi,

> I have a question about this analysis methodology. Is it not the case
> that in doing this the variability encountered in calculating the effect
> size values is ignored? Shouldn't the standard error values from the
> statistics generated in SPM be somehow included in the SPSS analysis?

That's a very good point. The approach taken by SPM, and therefore MarsBaR, is laid out in this email, and attached .ps file, by Will Penny:

http://www.jiscmail.ac.uk/cgibin/wa.exe?A2=ind0106&L=spm&P=R16328&I=1

Basically, on the assumption that the designs are the same at the first level, the contrast values above give the correct test at the second level. Of course, the designs may not be the same, and, if they are very different, this becomes a significant problem, that has not yet been addressed in el mondo SPM.

Best,

Matthew 
From: Matthew Brett <matthewb@uc...>  20031110 18:03:58

Hi,

> I have a question about this analysis methodology. Is it not the case
> that in doing this the variability encountered in calculating the effect
> size values is ignored? Shouldn't the standard error values from the
> statistics generated in SPM be somehow included in the SPSS analysis?

That's a very good point. The approach taken by SPM, and therefore MarsBaR, is laid out in this email, and attached .ps file, by Will Penny:

http://www.jiscmail.ac.uk/cgibin/wa.exe?A2=ind0106&L=spm&P=R16328&I=1

Basically, on the assumption that the designs are the same at the first level, the contrast values above give the correct test at the second level. Of course, the designs may not be the same, and, if they are very different, this becomes a significant problem, that has not yet been addressed in el mondo SPM.

Best,

Matthew 
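[Editor's note] The "SPM way" described here reduces to a one-sample t-test across subjects on the per-subject contrast values, using only their between-subject variance. A stdlib-only sketch; the contrast values below are made up for illustration:

```python
import math

# Hypothetical per-subject contrast values (one number per subject, as
# exported from each first-level ROI model; the numbers are made up).
con = [0.8, 1.2, 0.5, 1.9, 0.3, 1.1, 0.9, 1.4]

n = len(con)
mean = sum(con) / n
# Only the between-subject variance of the contrast values is used,
# just as when SPM puts con_????.img files into a second-level model.
var = sum((v - mean) ** 2 for v in con) / (n - 1)
t = mean / math.sqrt(var / n)          # one-sample t statistic, df = n - 1

# Compare against the two-tailed t(7) critical value for p < 0.05.
significant = abs(t) > 2.365
```

The first-level error variance never appears; Will Penny's argument is that, with identical first-level designs, this nevertheless gives the correct test.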
From: Kristina DoingHarris <kristina.doingharris@cs...>  20031107 19:29:53

>> On Tue, 2003-11-04 at 11:09, Scott Hayes wrote:
>> Hi,
>> I have not been able to find information on how to do a group ROI
>> analysis.

Matthew Brett replied:

> The simplest way to do a group analysis is rather annoying; it is just
> to do something like what you suggested: run the t contrast of interest
> for each subject, record the effect size values from the statistic table
> command, and put these values, across subjects, into another statistics
> package such as SPSS; I hope we will work out a better way in new
> releases.

I have a question about this analysis methodology. Is it not the case that in doing this the variability encountered in calculating the effect size values is ignored? Shouldn't the standard error values from the statistics generated in SPM be somehow included in the SPSS analysis?

Thanks,
Kris

Kristina DoingHarris, Ph.D.
Department of Psychology, University of Utah
380 S. 1530 E., Rm 502, Salt Lake City, UT 84112
ph. 801.518.8636, fax 801.581.5841
kristina.doingharris@... 
From: Matthew Brett <matthewb@uc...>  20031106 19:11:48

On Thu, 2003-11-06 at 07:00, AJ McNamara wrote:
> I understand I should be taken outside and statistically shot however could
> someone please explain why we get the contradictory P values, and if you're
> feeling particularly generous what would be a better way to go about reporting
> the large disparity in response to conditions between the two groups.

Hi,

Well, passing up the suggestion at the beginning of the para... Maybe the differences are between the model you've used for the p values and the PS function. If you've used an HRF regressor to do your tests, then there's a problem, because there are many different ways the PS function could differ from the canonical HRF and still give you the same estimated coefficient for the HRF regressor. Perhaps you want to use an FIR model instead of the HRF, and do your inference between groups using an F test?

Best,

Matthew 
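[Editor's note] An FIR model replaces the single canonical-HRF regressor with one regressor per post-stimulus time bin, so any response shape can be estimated and group differences tested with an F test over all bins. A minimal numpy sketch — the `fir_design` helper and its parameters are illustrative, not SPM's implementation:

```python
import numpy as np

def fir_design(onsets, n_scans, order):
    """Build an FIR design matrix: one column per post-stimulus time
    bin, so the response shape is estimated freely rather than being
    tied to the canonical HRF."""
    X = np.zeros((n_scans, order))
    for onset in onsets:
        for b in range(order):
            if onset + b < n_scans:
                X[onset + b, b] = 1.0
    return X

# Illustrative onsets and sizes (in scans), not taken from any real study.
X = fir_design(onsets=[3, 20, 37], n_scans=50, order=8)
assert X.shape == (50, 8)
# Each event switches on one bin regressor per post-stimulus scan.
assert X.sum() == 3 * 8
```

Group differences in response shape then show up as differences in any of the 8 bin coefficients, tested jointly with an F contrast spanning all FIR columns.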
From: AJ McNamara <s0197775@sm...>  20031106 15:00:10

I send this to MARSBAR rather than SPM as it may just be a "you're missing a concept when working with ROIs" thing. I have two groups, one clinical, one control. I understand that the two should not be run in the same matrix, as it violates statistical law since they come from two distinct populations; however it is done in many cases and for many reasons, and we just don't have the numbers to compare at second level. We wanted to know what happens in 10mm radius ROIs for the two groups: 4 subjects each, with 3 sessions per subject. We had two epoch conditions (A and B) and a non-coded baseline, so the ROIs were selected and each group was first run separately on the model, and the average PSTH and fitted response plotted for each of the epochs for the ROI in both conditions. Everything very pretty, all well and good. The clinical and control groups have completely different time courses for condition A, and similar for condition B. How exciting. We then placed the two groups into the same design matrix and ran the model, contrasting condition A in the clinical with condition A in the control group, and condition B in the clinical with condition B in the control group. The statistical output in terms of P and P uncorrected values was often completely contradictory to that expected from the graphs taken from the one-group models' PSTH plots, i.e. large differences in the graph yielded low p scores and vice versa. I understand I should be taken outside and statistically shot, however could someone please explain why we get the contradictory P values, and, if you're feeling particularly generous, what would be a better way to go about reporting the large disparity in response to conditions between the two groups?

Most Sincerely,
McStats 
From: Matthew Brett <matthewb@uc...>  20031104 22:47:38

On Tue, 2003-11-04 at 11:09, Scott Hayes wrote:
> Hi,
> I have not been able to find information on how to do a group ROI
> analysis.

The simplest way to do a group analysis is rather annoying; it is just to do something like what you suggested: run the t contrast of interest for each subject, record the effect size values from the statistic table command, and put these values, across subjects, into another statistics package such as SPSS. I hope we will work out a better way in new releases. To get the exact contrast value, rather than just writing it down from the window, you can use the procedure described here:

https://sourceforge.net/mailarchive/message.php?msg_id=4774610

There is no simple way to use the F contrast values in a second level analysis, as I'm sure you know...

Best,

Matthew 
From: Scott Hayes <smhayes@u.arizona.edu>  20031104 19:02:05

Hi,

I have not been able to find information on how to do a group ROI analysis. I have functionally defined ROIs of the left hippocampus in multiple subjects, and it is not clear to me how to proceed. I don't see anything like con.img files output, which is what one would use in a regular group analysis. The study is block design with 5 conditions, 516 reps (TRs), 1 session. I was considering comparing the beta weights derived from the "parameter estimates", "effects of real interest" for each condition across subjects, but I am not sure if that is appropriate. Any advice or suggestions would be greatly appreciated.

Thanks,
Scott Hayes 
From: <dkp@em...>  20031101 00:46:41

Dear Dr Brett,

I have done a couple of experiments:

1) I took a copy of the troublesome exam directory I'd moved from another machine, and I reran the model design, analysis, and the contrasts on my XP machine. MarsBaR indeed plays well with this directory now on the XP machine.

2) I took another copy of the troublesome directory (a copy for which the Stats and Contrasts had been run on another machine and NOT rerun on this XP machine). For this test, I tried to apply the MarsBaR "Change Path to Images" tool instead of rerunning the analyses:

Design > Change Path to Images
Analysis to Change Paths: SPM.mat
New Directory Root for Files: C:\Scott\3catprod\e3471bw\RESULTS\Stat1

Now I tried running Estimate ROIs and got the following error, which I get repeatedly as I try different alternatives:

C:\Scott\3catprod\e3471bw
SPM99: spm_results_ui (v2.31) 17:18:24  31/10/2003
========================================================================
SPM99: spm_getspm (v2.37) 17:18:24  31/10/2003
SPM computation : ...initialising ...computing ...done
contrast structure : ...saved to xCon.mat
MarsBaR analysis function prepended to path
MarsBaR defaults loaded from base defaults
Fetching data... 1/117
??? Cant open image file.
Error in ==> C:\Matlab\spm99\spm_sample_vol.dll
Error in ==> C:\Matlab\spm99\toolbox\marsbar\@maroi\getdata.m
On line 94 ==> data = spm_sample_vol(data_imgs(i),...
Error in ==> C:\Matlab\spm99\toolbox\marsbar\mars_roidata.m
On line 60 ==> [y vals] = getdata(o, VY);
Error in ==> C:\Matlab\spm99\toolbox\marsbar\spmrep\spm_spm.m
On line 36 ==> marsY = mars_roidata(roilist, VY, mars.statistics.sumfunc, 'v');
Error in ==> C:\Matlab\spm99\toolbox\marsbar\marsbar.m
On line 296 ==> spm_spm(tmp.VY,tmp.xX,tmp.xM,tmp.F_iX0,Sess,xsDes);
Error in ==> C:\Matlab\spm99\spm.m
On line 1182 ==> evalin('base',CBs{v1})
??? Error while evaluating uicontrol Callback.

____________________________________________________
Next I changed the path for SPMcfg.mat. Same error message.
____________________________________________________
Changed the path for both of them (SPM.mat and SPMcfg.mat), setting the Directory Root to C:\Scott\3catprod\e3471bw (instead of C:\Scott\3catprod\e3471bw\RESULTS\Stat1). Same error.
_____________________________________________________

Clearly I don't understand what the Change Paths tool really needs to do its job. I'd rather use your tool than rerun all the analyses... can you point me in the right direction? Thank you again. I really appreciate all of your help. Have a great Halloween.

Dianne

Dianne K. Patterson Ph.D.
Psychology, Room 217E
6264571
Cognition and NeuroImaging Labs
University of Arizona, Tucson, AZ 