[Mrtrix-discussion] Question about filter_tracks command
Robert Smith
r.smith at brain.org.au
Sun Jan 12 17:21:35 PST 2014
Dorian
I attempted to implement a sequence that performed the phase encode
reversal on-the-fly for our Siemens scanners, but was not successful
(reversing the gradients is easy; getting the reconstruction to work is
not). Our simple (but slightly cumbersome) solution is to explicitly
acquire two volumes before the DWIs, and reverse the phase encode direction
of the second volume using the protocol editor at the scanner console.
Instructions are here
<http://www.nitrc.org/pipermail/mrtrix-discussion//2013-March/000649.html>
for Siemens scanners; I've never used Philips before but hopefully a
similar option is available.
Regarding the actual inhomogeneity estimation & correction, I have found
*topup* and *eddy* provided in FSL 5 to work quite well; certainly better
than my own prior attempts at such a method. Unfortunately, because it is a
highly generalised approach, the command interfaces can be a little daunting
to figure out, but the FSL website provides adequate documentation. The next
MRtrix release will come with a script that sets up and runs these commands
for a typical use case (two volumes, A>>P then P>>A, followed by DWIs with
phase encoding A>>P), which can easily be modified for other acquisition
strategies, making the whole process much easier.
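In case it helps in the meantime, here is a rough sketch of that workflow
(file names are placeholders, and the phase-encode rows / readout time in
acqparams.txt must match your own protocol; see the FSL documentation for
topup and eddy):

  # Merge the pair of reversed phase-encode volumes (A>>P then P>>A) into one 4D image
  fslmerge -t b0_pair b0_AP b0_PA

  # acqparams.txt: one row per volume in b0_pair, giving the phase encode
  # direction and total readout time, e.g.
  #   0 -1 0 0.05
  #   0  1 0 0.05
  topup --imain=b0_pair --datain=acqparams.txt --config=b02b0.cnf \
        --out=topup_results --iout=hifi_b0

  # Brain mask from the distortion-corrected b=0 images
  fslmaths hifi_b0 -Tmean hifi_b0_mean
  bet hifi_b0_mean hifi_b0_brain -m

  # index.txt: one entry per DWI volume, pointing to the matching row of
  # acqparams.txt (all 1's here, since the DWIs were acquired A>>P)
  eddy --imain=dwi --mask=hifi_b0_brain_mask --acqp=acqparams.txt \
       --index=index.txt --bvecs=bvecs --bvals=bvals \
       --topup=topup_results --out=dwi_corrected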
Rob
--
*Robert Smith, Ph.D*
Research Officer, Imaging Division
The Florey Institute of Neuroscience and Mental Health
Melbourne Brain Centre - Austin Campus
245 Burgundy Street
Heidelberg Vic 3084
Ph: +61 3 9035 7128
Fax: +61 3 9035 7301
www.florey.edu.au
On Mon, Jan 13, 2014 at 11:52 AM, Dorian P. <alb.net at gmail.com> wrote:
> Hi Robert,
>
> Regarding your advice to use the inverse phase method, how hard would it
> be to implement? We use a Philips scanner, and the first issue is to create
> the sequence that acquires the reverse phase. The second, however, is how
> to apply that for correction. I haven't found any information on which
> tools can perform the inverse phase correction.
>
> Thank you.
> Dorian
> TJU
>
>
> 2014/1/12 Robert Smith <r.smith at brain.org.au>
>
>> Hi Dan
>>
>> No worries at all; it can take a little while to wrap your head around
>> having things operate in scanner space. Most people are used to either
>> voxel coordinates or MNI space, but for multi-modality per-subject data it
>> makes a lot of sense.
>>
>> As long as the seed image 'overlaps' with the diffusion data in scanner
>> space, then yes, a ROI drawn on the anatomical image can be used directly
>> in streamtrack; it doesn't matter if the voxel grids are oriented
>> differently with respect to the scanner axes, or the voxel sizes are
>> different. The overlay tool in MRview is handy for assessing image
>> alignment in such circumstances. However, I will raise two more points here:
>>
>> 1. At least some form of image registration is required to account for
>> inter-sequence subject motion and/or scanner re-calibration. Personally, I
>> register the anatomical image to the diffusion images before deriving any
>> other images (e.g. seeding masks) from it; that way, all derived images
>> have the same header transform as the anatomical image, and therefore also
>> align with the diffusion data in scanner space (see the example command
>> after this list). Alternatively, you can use the transform derived from
>> registration and apply it to the relevant ROI as you have been, but omit
>> the step where the ROI data is re-gridded to match the diffusion images.
>> 2. Due to the presence of susceptibility-induced geometric
>> distortions in EPI, a simple linear transform (whether rigid-body or
>> affine) is generally not adequate to achieve good alignment between the
>> structural and diffusion images. Therefore, although you can in principle
>> use a high-resolution seed image, in reality the anatomical structure you
>> have delineated in the high-resolution image may not map to the correct
>> anatomical location in the diffusion images, unless you have explicitly
>> performed susceptibility distortion correction. Personally, I am a big
>> advocate of the reversed phase-encode method for estimating the
>> inhomogeneity field and subsequently correcting for these distortions, and
>> always recommend that people start acquiring the relevant data in all of
>> their studies.
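>>
>> As a rough sketch of the first approach (using FSL's flirt, with a mean
>> b=0 image extracted from the DWIs; all file names here are placeholders):
>>
>>   # Rigid-body (6 dof) registration of the anatomical image to the diffusion
>>   # data. ROIs / seed masks subsequently drawn on T1_reg.nii.gz will then
>>   # align with the diffusion data in scanner space, with no further
>>   # transformation of the ROIs required.
>>   flirt -in T1.nii.gz -ref mean_b0.nii.gz -dof 6 -out T1_reg.nii.gz -omat T1_to_dwi.mat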
>>
>> With regards to tracks2prob, as long as the T1 and diffusion image data
>> overlap in scanner space, then again yes, you can use the T1 image as the
>> template image for tracks2prob. You can even use both the -template and
>> -vox options in conjunction to produce a TDI where the voxel grid is
>> aligned with that of the T1 image, but the voxel size is smaller.
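>>
>> For example, something along the lines of (file names are placeholders):
>>
>>   tracks2prob whole_brain.tck -template T1.mif -vox 0.5 tdi.mif
>>
>> would produce a TDI whose voxel grid is aligned with that of T1.mif, but
>> with 0.5 mm voxels.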
>>
>>
>> Happy tracking!
>> Rob
>>
>>
>> --
>>
>> *Robert Smith, Ph.D*
>> Research Officer, Imaging Division
>>
>>
>> The Florey Institute of Neuroscience and Mental Health
>> Melbourne Brain Centre - Austin Campus
>> 245 Burgundy Street
>> Heidelberg Vic 3084
>> Ph: +61 3 9035 7128
>> Fax: +61 3 9035 7301
>> www.florey.edu.au
>>
>>
>> On Mon, Jan 13, 2014 at 6:14 AM, Daniel Lumsden <doclumsden at hotmail.com> wrote:
>>
>>> Rob
>>>
>>> Thank you for a very helpful answer - apologies for slow acknowledgement
>>> and reply!
>>>
>>> Given that mrtrix generates streamlines in "real space", does this mean
>>> that I could create a seed region from an anatomical image converted from
>>> DICOM to NIfTI using mrconvert, then supply that mask directly to
>>> streamtrack?
>>>
>>> I've been creating mask images from reasonably high resolution T1
>>> images, then using the linear transform generated by flirt in FSL
>>> (transform of structural scan to diffusion scan) to transform these mask
>>> images into the much lower resolution diffusion space.
>>>
>>> If I can supply the seed directly, could I then use my T1 image as the
>>> template for tracks2prob to visualise the track density on the scans
>>> without needing to apply any transformation?
>>>
>>> I hope this isn't a hopelessly naive question!
>>>
>>> Dan
>>>
>>> ------------------------------
>>> Date: Sun, 22 Dec 2013 10:37:52 +1100
>>> Subject: Re: [Mrtrix-discussion] Question about filter_tracks command
>>> From: r.smith at brain.org.au
>>> To: doclumsden at hotmail.com
>>> CC: mrtrix-discussion at www.nitrc.org
>>>
>>>
>>> Hi Dan
>>>
>>> If you use the -template option in tracks2prob with the FA image as the
>>> template, then yes, the transformation (at least the orientation component)
>>> will be identical to the FA image. The offset component of the transform
>>> will differ slightly if the voxel size is different in the TDI.
>>>
>>> This question does however raise an important difference between MRtrix
>>> and other packages. The streamlines are generated and stored in 'real' /
>>> 'scanner' space, rather than with respect to any particular image space. To
>>> read an image value at a particular streamline position, the location is
>>> mapped according to the transform of that particular image. Therefore, it
>>> is not actually necessary for the ROI to be 'in the same space' (i.e. on
>>> the same voxel grid) as the image used to generate the streamlines; as long
>>> as the images are correctly aligned / registered, the streamline position
>>> in scanner space can be transformed to the appropriate location in the
>>> image volume.
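>>>
>>> (Roughly speaking: if T is the 4x4 header transform mapping an image's
>>> voxel coordinates, in mm, to scanner space, then a streamline vertex p is
>>> sampled at the voxel nearest to inverse(T) * p divided component-wise by
>>> the voxel sizes; the ROI data themselves are never resampled.)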
>>>
>>> Theoretically, you could have a ROI based on the FA image, a ROI based
>>> on the high-resolution TDI, and a ROI based on a (co-registered) anatomical
>>> image, and they could all be used in the same call to filter_tracks.
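>>>
>>> For instance, something along the lines of (ROI file names are
>>> placeholders):
>>>
>>>   filter_tracks whole_brain.tck -include roi_on_fa.mif -include roi_on_tdi.mif -include roi_on_t1.mif filtered.tck
>>>
>>> with each -include image on its own voxel grid, as long as they are all
>>> aligned in scanner space.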
>>>
>>> So if the purpose of the TDI is to identify a particular pathway and
>>> draw a ROI, it doesn't actually matter whether or not the -template option
>>> is provided to tracks2prob; the TDI will always be inherently aligned with
>>> the streamlines data, regardless of the alignment of the voxel grid. Though
>>> as an aside, you'll find the TDI file size will be much smaller if the
>>> -template option is *not* used, as it then uses the streamlines data to
>>> determine the required spatial extent of the image, so you don't get as
>>> much 'dead space' around the brain.
>>>
>>> Happy holidays all
>>> Rob
>>>
>>>
>>> --
>>>
>>> *Robert Smith*
>>> Post-Doctoral Researcher, Imaging Division
>>>
>>> The Florey Institute of Neuroscience and Mental Health
>>> Melbourne Brain Centre - Austin Campus
>>> 245 Burgundy Street
>>> Heidelberg Vic 3084
>>> Ph: +61 3 9035 7128
>>> Fax: +61 3 9035 7301
>>> www.florey.edu.au
>>>
>>>
>>> On Sun, Dec 22, 2013 at 4:55 AM, Daniel Lumsden <doclumsden at hotmail.com> wrote:
>>>
>>> Dear All
>>>
>>> I'm interested in using the improved resolution of track density images
>>> to improve my ROI generation for selecting tracks from a whole-brain
>>> tractography .tck file.
>>>
>>> I can define, e.g., the PLIC much more sharply and easily from the track
>>> density image generated from whole-brain tractography than from the
>>> standard colour-coded FA maps. The orientation of the track density image
>>> depends upon the -template image supplied at the tracks2prob step. If I
>>> use the FA maps as the template, does this automatically put a ROI drawn
>>> from the resultant track density maps in diffusion space? If so, I presume
>>> I can use these ROIs to directly filter the streamlines I'm interested in
>>> from the whole-brain tractography .tck file without applying any
>>> transformation?
>>>
>>> Many thanks in advance
>>>
>>> Dan
>>>
>>>
>>>
>>>
>>
>>
>>
>