Hi Donald,

Greetings :)

Just a question on the same command, 'tracks2prob':

tracks2prob [ options ] tracks reference/template output

Can the template/reference be any kind of image? For example, if I supply an MNI template, will the tracks be mapped into that space? Or should I use dwi.mif (or the b0) as the reference, since it is in the same space as the tracks, and then use a calculated transformation matrix to transform the resulting track map into the other space?
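To make the two alternatives concrete, a rough sketch (all file names here are made up, and the flirt call is only an example of applying a pre-computed transformation matrix, not a recommendation):

  # (a) an MNI template supplied directly as the reference/template image:
  tracks2prob whole_brain.tck MNI152_T1_2mm.nii tracks_mni.nii

  # (b) a reference in native DWI space, followed by a separate transformation step:
  tracks2prob whole_brain.tck dwi.mif tracks_native.nii
  flirt -in tracks_native.nii -ref MNI152_T1_2mm.nii -applyxfm -init dwi2mni.mat -out tracks_mni.nii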
Thank you,
vinod

----------------------------------------
From: Donald Tournier <d.tournier@brain.org.au>
To: Wim Otte <wim@invivonmr.uu.nl>
Cc: mrtrix mailinglist <mrtrix-discussion@www.nitrc.org>
Sent: Wednesday, April 8, 2009 3:11:06 AM
Subject: Re: [Mrtrix-discussion] Crossing-fibers gray matter CSD
Hi Wim,

Yes, in theory, tracks2prob should just copy the layout from the reference image. But it so happens that the NIfTI image handling routine overrides the layout and sets it to +0,+1,+2. There is no point in trying to use mrconvert to change the layout for a NIfTI image; it will always end up with the same result. What you could do is mrconvert the reference.nii image, and then the layouts will both be +0,+1,+2. Does that sound like a workable solution?

I might try to make the NIfTI handler honour the layout specification, but that won't be ready for some time...

Regards,

Donald.
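A minimal sketch of the workaround described above (the intermediate file name is arbitrary; mrinfo is only there to confirm that both images end up with the same layout):

  mrconvert reference.nii reference_reordered.nii
  tracks2prob M1.tck reference_reordered.nii output.nii
  mrinfo reference_reordered.nii   # should report Data layout: [ +0 +1 +2 ]
  mrinfo output.nii                # should report Data layout: [ +0 +1 +2 ]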
On Tue, Apr 7, 2009 at 5:47 PM, Wim Otte <wim@invivonmr.uu.nl> wrote:
> Hi Donald,
>
> Thank you very much for your clear tracking explanations!
>
> The layouts of the reference image and output are indeed different (see
> output below); but shouldn't tracks2prob 'copy' the data layout from
> the reference?
> I'll try mrconvert with the layout option; it's not a real problem,
> it's just that I am scared of flipping left and right halfway through the
> post-processing...
> Thanks again!
>
> Wim Otte
>
>
> (Data layout: [ -0 -1 +2 ] becomes Data layout: [ +0 +1 +2 ]).
>
>
> tracks2prob M1.tck reference.nii output.nii
>
> mrinfo reference.nii results in:
> ************************************************
> Image:            "reference.nii"
> ************************************************
>   Format:           NIfTI-1.1
>   Dimensions:       64 x 25 x 64
>   Voxel size:       5 x 5 x 5
>   Dimension labels: 0. left->right (mm)
>                     1. posterior->anterior (mm)
>                     2. inferior->superior (mm)
>   Data type:        32 bit float (little endian)
>   Data layout:      [ -0 -1 +2 ]
>   Data scaling:     offset = 0, multiplier = 1
>   Comments:         (none)
>   Transform:         1  -0   0  -299
>                     -0   1   0  -116.1
>                     -0  -0   1  -16
>                      0   0   0   1
>
> mrinfo output.nii results in:
> ************************************************
> Image:            "output.nii"
> ************************************************
>   Format:           NIfTI-1.1
>   Dimensions:       64 x 25 x 64
>   Voxel size:       5 x 5 x 5
>   Dimension labels: 0. left->right (mm)
>                     1. posterior->anterior (mm)
>                     2. inferior->superior (mm)
>   Data type:        32 bit float (little endian)
>   Data layout:      [ +0 +1 +2 ]
>   Data scaling:     offset = 0, multiplier = 1
>   Comments:         track fraction map
>                     count: 5000; init_threshold: 0.2; lmax: 8;
>                     max_dist: 200; max_num_tracks: 5000
>   Transform:         1  -0   0  -299
>                     -0   1   0  -116.2
>                     -0  -0   1  -16
>                      0   0   0   1
>
>
> On 4/7/09, Donald Tournier <d.tournier@brain.org.au> wrote:
>> Hi Wim,
>>
>>
>> > - In which voxels do I have to estimate the fiber response function?
>> > In all brain-voxels (resulting in a less 'flat' response function) or
>> > in the white matter voxels? As we need it to track in both white
>> > matter and gray matter voxels.
>>
>>
>> This is an interesting question, and not one I have a satisfactory
>> answer to. Mind you, I don't think anyone has a good answer to this.
>> The few people who have suggested the possibility of tracking in grey
>> matter (I can only think of Van Wedeen, and maybe indirectly Tim
>> Behrens for deep grey matter structures) have both used methods that
>> don't need an explicit response function (diffusion spectrum imaging
>> and the diffusion tensor, respectively).
>>
>> Thankfully, CSD is not overly sensitive to inaccuracies in the
>> response function. I'd recommend you go for the 'flattest' response
>> function you can get, since this will always produce sharper
>> directions. This means that you should opt for the white matter
>> response function. Besides, that is your only real option anyway,
>> since it won't be possible to isolate even a single voxel within the
>> grey matter that contains a single coherent fibre direction, from
>> which you might have been able to estimate a response function...
>>
>>
>> > - How does streamtrack determine the principal tracking direction(s)
>> > from the spherical decomposition data? Is it using a 'find_SH_peaks'
>> > internally? Does it take crossing-fibers into account, or only the
>> > first direction?
>>
>>
>> There are various algorithms within streamtrack, but all the SD-based
>> ones will take crossing fibres into account. The SD_STREAM option will
>> find the closest peak to the current direction of tracking, and use
>> its direction for the next step. The SD_PROB option, on the other hand,
>> randomly samples a direction from the current distribution of fibre
>> orientations (the FOD), within a 'cone' about the current direction of
>> tracking. The angle of the cone is determined from the curvature
>> constraint, and is given by phi = 2 asin (s/2R), where s is the step
>> size and R is the radius of curvature. This means both algorithms will
>> preferentially track through crossing fibre regions, provided of
>> course that there is a fibre orientation in that direction.
>>
>>
>> > - tracks2prob flips my tracks in the x and y plane (I have to
>> > 'correct' with fslswapdim -x -y z <input> <output> to get it right
>> > with the 'reference image'). The fibers are oriented correctly in
>> > mrview. Am I doing something wrong? (q-form, s-form thing?).
>>
>>
>> I'm surprised by this. Can you provide more details on what you mean?
>> Is the resulting image flipped within MRView, or within FSL? As far as
>> I can tell, FSLView is not very flexible when it comes to data
>> ordering, and will for example display images acquired in the sagittal
>> plane 'as if' they had been acquired axially, which will of course look
>> wrong (although the L-R, A-P, and I-S orientation labels are in the
>> correct places). I'd imagine that this would also mean that overlaying
>> two images where the data are ordered differently will produce
>> artefacts like the ones you mention.
>>
>> If the problem is with FSL's display, then you may have no other
>> option than what you've already done (although mrconvert does have a
>> "-layout" option that could be used to the same effect, and probably
>> more robustly). If the problem is with MRView, let me know and I will
>> investigate further.
>>
>> Hope this helps.
>> Cheers!
>>
>> Donald.
>>
>>
>> --
>> Jacques-Donald Tournier (PhD)
>> Brain Research Institute, Melbourne, Australia
>> Tel: +61 (0)3 9496 4078
>>
>
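As a quick illustration of the cone-angle formula phi = 2 asin (s/2R) quoted above, using purely illustrative values for the step size and radius of curvature (say s = 0.2 mm and R = 1 mm):

  phi = 2 * asin( s / (2*R) )
      = 2 * asin( 0.2 / 2.0 )
      = 2 * asin( 0.1 )
      ~ 11.5 degrees

So with these example values, each new step direction would be sampled from the FOD within a cone of roughly 11.5 degrees about the current direction of tracking.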
--
Jacques-Donald Tournier (PhD)
Brain Research Institute, Melbourne, Australia
Tel: +61 (0)3 9496 4078
_______________________________________________
Mrtrix-discussion mailing list
Mrtrix-discussion@www.nitrc.org
http://www.nitrc.org/mailman/listinfo/mrtrix-discussion