Oh, my crusade is finally over! Let me record for posterity how I managed to do this.
There are many ways to represent images inside a NIfTI file, and many terms mean different things in different communities (for example, what is called LPS in the ITK world is RAI in the nipy/FreeSurfer world). Therefore, the most effective way to figure out how to convert your coordinates into what antsApplyTransformsToPoints expects is a test case. Here are the steps:
1. Create a fake label volume based on your moving image. You can use ITK-SNAP or FSLView to do this: set one voxel to 1 and all others to zero. Write down the coordinates (in mm) of that point as your viewer reports them.
2. Run ImageMath with the LabelStats operation on the label volume to find the coordinates of that point according to ANTs.
3. By comparing the coordinates from steps 1 and 2, you can figure out what conversion (swapping axis order, multiplying by -1) you need to apply to your list of points before feeding them to antsApplyTransformsToPoints.
You will probably have to go through a similar process to convert the coordinates output by antsApplyTransformsToPoints into something compatible with the space your fixed image is in.
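For illustration, the comparison in steps 2 and 3 can be sketched in Python with numpy (the coordinates below are invented, and `deduce_conversion` is my own helper, not part of ANTs):

```python
import numpy as np

def deduce_conversion(viewer_xyz, ants_xyz):
    """Compare the same landmark as reported by your viewer (step 1) and by
    ANTs' ImageMath LabelStats (step 2), and return the axis permutation and
    sign flips that map the viewer's convention onto the ANTs one.
    Assumes the point is off-axis (no zeros, no equal magnitudes), so the
    matching is unambiguous."""
    viewer = np.asarray(viewer_xyz, dtype=float)
    ants = np.asarray(ants_xyz, dtype=float)
    perm, signs = [], []
    for a in ants:
        j = int(np.argmin(np.abs(np.abs(viewer) - abs(a))))  # matching axis
        perm.append(j)
        signs.append(1.0 if np.isclose(viewer[j], a) else -1.0)
    return perm, signs

# e.g. viewer reports (10, 20, 30) in RAS, ANTs reports (-10, -20, 30) in LPS:
print(deduce_conversion((10.0, 20.0, 30.0), (-10.0, -20.0, 30.0)))
# -> ([0, 1, 2], [-1.0, -1.0, 1.0])
```

The resulting permutation and signs are exactly the conversion to apply to every row of the points file before (and, mirrored, after) running antsApplyTransformsToPoints.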
I hope this helps!
Best,
Chris
Hi,
I'm coregistering two images. The moving image is in LIP orientation and the fixed image is in LPI orientation (the FSL MNI template). The coregistration works great, but I cannot get antsApplyTransformsToPoints to do the correct conversion. I am applying an LIP->LPS transformation to the coordinates before feeding them to antsApplyTransformsToPoints, but this still does not solve the problem. I documented all the steps in the following notebook:
http://nbviewer.ipython.org/urls/dl.dropbox.com/s/nxg52w0oqz4k6go/antsApplyTransformationToPoints%20and%20LPS.ipynb?create=1
the moving file can be found here: http://human.brain-map.org/api/v2/well_known_file_download/157722290
Any help would be very much appreciated.
Best,
Chris
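For readers hitting the same wall: ITK, and therefore ANTs, works in LPS physical coordinates, while most NIfTI viewers report world coordinates in RAS; converting a point between the two just negates x and y. A minimal sketch in Python (the function name is my own):

```python
def ras_to_lps(point):
    """Flip a world-coordinate point between RAS and LPS conventions.

    The conversion is its own inverse: negate x (R<->L) and y (A<->P);
    z (S) is shared by both conventions.
    """
    x, y, z = point
    return (-x, -y, z)

print(ras_to_lps((10.0, 20.0, 30.0)))  # -> (-10.0, -20.0, 30.0)
```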
Did you look over this discussion yet?
http://sourceforge.net/p/advants/discussion/840260/thread/ff4587ef/?limit=25#c007
Seems like you did, given that your script looks OK.
I don't see any obvious problems or solutions; I will let you know if I think of something other than the obvious "debugging" options.
brian
On Fri, Jan 24, 2014 at 11:49 AM, Chris Filo Gorgolewski gorgolewski@users.sf.net wrote:
Thanks in advance; any help will be much appreciated. I did read the discussion you mentioned, hence the LIP->LPS conversion and the application of the reverse transform.
What are the "debugging" options you mention?
Re: debugging.
You might transform the images to LPS space before landmarking, then identify the point coordinates in this new space.
This can help you understand the transformation that needs to be made.
brian
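As a concrete version of this check: the world coordinate of a landmark is just the image affine applied to its voxel index, so after reorienting you can compute where a known voxel should land and compare it with what the viewer reports. A sketch with numpy (the affine below is invented for illustration):

```python
import numpy as np

def voxel_to_world(affine, ijk):
    """Map a voxel index (i, j, k) to world coordinates via a 4x4 affine."""
    ijk_h = np.append(np.asarray(ijk, dtype=float), 1.0)  # homogeneous coordinates
    return (affine @ ijk_h)[:3]

# Invented 2 mm isotropic affine with origin at (-90, -126, -72):
affine = np.array([
    [2.0, 0.0, 0.0,  -90.0],
    [0.0, 2.0, 0.0, -126.0],
    [0.0, 0.0, 2.0,  -72.0],
    [0.0, 0.0, 0.0,    1.0],
])
print(voxel_to_world(affine, (45, 63, 36)))  # -> [0. 0. 0.]
```

If the number the viewer shows for that voxel differs from this, the viewer is using a different convention (or a different header field) than the affine you fed in.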
On Sat, Jan 25, 2014 at 5:30 AM, Chris Filo Gorgolewski gorgolewski@users.sf.net wrote:
Hi,
I've followed your recommendation and converted both the fixed and moving images to LPS before doing the coregistration and applying the transformation to the point coordinates. Unfortunately, I am still getting results quite far from what I expected. I don't think my problems are related to image orientation, since I am using LPS, which is the orientation that ANTs assumes.
Here is the notebook replicating this problem step by step:
http://nbviewer.ipython.org/urls/dl.dropbox.com/sh/3nrrqixjvo8ku8u/i7VJcK9z_y/ants_example/antsApplyTransformationToPoints%20and%20LPS%20part%202.ipynb?create=1
Here are all the input and output files: https://www.dropbox.com/sh/3nrrqixjvo8ku8u/2HMqrpyu4-/ants_example
Please let me know if there is anything I can do to help get to the bottom of this.
Best,
Chris
chris,
My apologies, but I don't have the bandwidth to look into the details of this. What I can tell you is that ITK point transformation and ITK image transformation use the same architecture.
I would see if you can reproduce the chicken example.
brian
On Mon, Jan 27, 2014 at 1:56 PM, Chris Filo Gorgolewski gorgolewski@users.sf.net wrote:
Sorry, pressed send too soon.
http://stnava.github.io/chicken/
I'd just guess that either:
(1) the way you generate the input point CSV files is funky (see the chicken example), or
(2) something funny is going on with FSL and/or the viewer you are using relative to ITK. ITK-SNAP shows what ITK sees; e.g., ITK ignores either the qform or the sform (I forget which), and anything FSL adds will probably be ignored.
brian
On Mon, Jan 27, 2014 at 2:08 PM, stnava stnava@users.sf.net wrote:
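Regarding guess (1): antsApplyTransformsToPoints reads its points from a CSV file with a header row; as far as I recall (verify against the chicken example), it expects x, y, z, t columns, with the points already in LPS millimetres. A sketch of writing such a file with Python's csv module:

```python
import csv

# Landmark points assumed to already be in LPS millimetres.
points = [(-10.0, -20.0, 30.0), (5.5, -2.0, 14.0)]

with open("points_lps.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["x", "y", "z", "t"])  # header row; t is time, zero for static data
    for x, y, z in points:
        writer.writerow([x, y, z, 0.0])

# then, e.g.:
# antsApplyTransformsToPoints -d 3 -i points_lps.csv -o points_out.csv -t <transform>
```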