help > preprocessing
Jan 9, 2017  08:01 PM | Jeff Peters
preprocessing
Dear Dr. Spielberg,

Thank you for sharing the toolbox with other users. I am beginning with the preprocessing steps and I have a few questions which I'd appreciate answers to:

1. It appears that the preprocessing here primarily handles motion/artifact correction. Other steps including slice timing, segmentation, co-registration, alignment, and normalization must be done separately and prior to GTG preprocessing. Is this correct? I noticed that there is a slice timing option in the GTG toolbox preprocessing procedure. If I have already done slice timing elsewhere, this step isn't necessary, correct?

2. For each subject, I have 4 separate scanning sessions of the same task. Would you recommend concatenating the sessions or analyzing each session separately? My concern is that individual sessions may not provide sufficient data for a reliable analysis.

3. My functional files aren't named after the participant ID. Will this still work or do I have to rename the files?

Thank you for your help!

JP
Jan 11, 2017  12:01 PM | hnh
RE: preprocessing
Hey Jeff,
I'm also starting to use Dr. Spielberg's toolbox and I'm very curious to know the answer to your second question concerning the scanning sessions, because I have the same issue.

However, I think at least I might be able to help you with the third question:

From what I understand, it is not necessary to rename the functional files as long as the filenames and file paths are consistent. For example, if the functional file for subject 001 is stored at C:/User/subject001/epis/4D.nii and the file for subject 002 accordingly at C:/User/subject002/epis/4D.nii, and so on, you select the 4D functional file for the first participant and "the toolbox locates the rest of the files by replacing the participant ID".
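The substitution Hannah describes can be sketched in a few lines of Python. This is only an illustration of the idea (pick the first participant's path, swap in the other IDs); the folder layout and ID format are just the example from this thread, not necessarily how GTG matches paths internally.

```python
# Sketch of the path-substitution idea: derive each participant's file path
# by replacing the first participant's ID in a template path.
# (Illustrative only -- not the toolbox's actual matching code.)

def build_paths(template_path, first_id, subject_ids):
    """Replace `first_id` in `template_path` with each ID in `subject_ids`."""
    if first_id not in template_path:
        raise ValueError("template path must contain the first participant's ID")
    return [template_path.replace(first_id, sid) for sid in subject_ids]

paths = build_paths("C:/User/subject001/epis/4D.nii", "001", ["001", "002", "003"])
# e.g., the second entry points at subject002's file
```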

I hope this helps?

HB
Jan 11, 2017  10:01 PM | Jeffrey Spielberg
RE: preprocessing
Hi Hannah and Jeff,

Sorry about the delay in responding.

Yes, Hannah is correct about question #3 (thanks Hannah!).  The functional files themselves don't need to contain identifiers.  You just need something in the file path to contain an identifier, so that you can provide a unique path for each file.  

Below are answers to the other two questions:

"1. It appears that the preprocessing here primarily handles motion/artifact correction. Other steps including slice timing, segmentation, co-registration, alignment, and normalization must be done separately and prior to GTG preprocessing. Is this correct? I noticed that there is a slice timing option in the GTG toolbox preprocessing procedure. If I have already done slice timing elsewhere, this step isn't necessary, correct?"

GTG does do slice timing correction, but this is the first step, so if you've already done it, you can just input the corrected files.  

GTG does not do segmentation, so you'll need to create white matter/ventricular masks outside the toolbox.  However, I have a script that makes this process fairly straightforward.  I believe I included it with GTG (in the extras folder), but I've attached it here anyway (including some necessary masks in MNI152 space).  You'll need to have already done the anatomical-to-standard-space registrations, which the script assumes were done with FSL's FNIRT.  So, if you are using SPM or something else, you'll need to use a different method.  

Registration of functional to anatomical and anatomical/functional to standard space must be done outside the toolbox. The toolbox does have the capability to apply transforms/warps (using FSL tools), but I'd recommend doing this outside of the toolbox, as it can get a bit tricky.  Specifically, you want to leave your functional data in native functional space, transform your white matter/ventricular masks to functional space, and warp your ROI map to functional space.  Basically, everything needs to end up in the same space, and I recommend that being native functional space.  
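For the FSL route described above, bringing a standard-space mask into native functional space would typically go through FSL's applywarp. The sketch below only assembles the command line (a dry run); all file names are hypothetical, and the standard-to-functional warp is assumed to already exist (e.g., from FNIRT followed by invwarp).

```python
# Hedged sketch: build an FSL `applywarp` command line that would warp a
# standard-space mask into native functional space. The command is only
# assembled here, not executed; in practice you would pass the list to
# subprocess.run. All file names are hypothetical.

def applywarp_cmd(in_img, ref_img, warp, out_img, nearest_neighbour=True):
    cmd = ["applywarp", "--in=" + in_img, "--ref=" + ref_img,
           "--warp=" + warp, "--out=" + out_img]
    if nearest_neighbour:          # nearest-neighbour keeps binary masks binary
        cmd.append("--interp=nn")
    return cmd

cmd = applywarp_cmd("wm_mask_MNI.nii.gz", "func_example_vol.nii.gz",
                    "standard2func_warp.nii.gz", "wm_mask_func.nii.gz")
print(" ".join(cmd))
```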


"2. For each subject, I have 4 separately scanning sessions of the same task. Would you recommend concatenating the sessions or analyzing each session separately? My concern is that individual sessions may not provide sufficient data for a reliable analysis."

I would recommend concatenating the sessions, but don't do that until AFTER preprocessing.  Just preprocess each run separately, then concatenate.  I also have a script to make this easy, but it's not commented well.  Let me fix that up a bit and get back to you.  
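The preprocess-each-run-then-concatenate idea amounts to stacking the runs along the time axis. Here is a toy pure-Python illustration (each run as a list of timepoints, each timepoint as a flat list of voxel values); this is not the attached concat_timeseries.m, just the underlying operation, and real data would be 4D NIfTI volumes.

```python
# Toy illustration of "preprocess each run separately, then concatenate":
# stack runs along the time axis after checking that spatial sizes match.
# (Not the attached concat_timeseries.m -- just the underlying idea.)

def concat_runs(runs):
    """Concatenate runs in time; every volume must have the same voxel count."""
    n_voxels = len(runs[0][0])
    for run in runs:
        for volume in run:
            if len(volume) != n_voxels:
                raise ValueError("all volumes must have the same voxel count")
    return [volume for run in runs for volume in run]

run1 = [[1, 2], [3, 4]]          # 2 timepoints x 2 voxels
run2 = [[5, 6], [7, 8], [9, 0]]  # 3 timepoints x 2 voxels
combined = concat_runs([run1, run2])   # 5 timepoints, still 2 voxels each
```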

Best,
Jeff
Attachment: make_masks.zip
Jan 13, 2017  03:01 PM | Jeff Peters
RE: preprocessing
Dear Dr. Spielberg,

Thank you so much for your helpful responses. I do all my preprocessing in SPM, though I would like to take advantage of the scrubbing option in your toolbox. Regarding the concatenation, I would be grateful to receive the script for that.

Also, thank you to Hannah for chiming in to help a fellow toolbox user!

JP
Jan 16, 2017  01:01 PM | hnh
RE: preprocessing
Dear Dr. Spielberg,

Thank you very much for your helpful responses and also for the mask script! I also do all my preprocessing in SPM - is there any obstacle to using the mask script apart from having to convert the .nii files to .nii.gz files? (Maybe Jeff is referring to this question with his comment on the "scrubbing option", which I don't understand?)
I'd also be very grateful for the concatenation script. I'm glad I could help with the filenaming question.

Best, Hannah
Jan 17, 2017  11:01 PM | Jeffrey Spielberg
RE: preprocessing
Hi Jeff,

Attached is the updated script. 

If you are using SPM, then you should definitely warp the ROI map and ventricular and white matter masks into native functional space before input into the toolbox.  

Best,
Jeff
Attachment: concat_timeseries.m
Jan 17, 2017  11:01 PM | Jeffrey Spielberg
RE: preprocessing
Hi Hannah,

It would be hard to use the mask script directly, as it needs FSL-style warp data (used to warp the MNI152-standard-space masks to anatomical space).  However, you could do the same type of thing manually.  Basically, you would need to:
1. Segment the anatomical
2. Warp standard masks of ventricles and white matter from MNI152 space to anatomical space
3. Mask the segmentation output from #1 with the warped masks from #2
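If one were to follow these three steps with FSL tools (as the attached script assumes), the commands might look roughly like the dry-run sketch below; the commands are only assembled, all file names are hypothetical, and the standard-to-anatomical warp (e.g., from invwarp after FNIRT) is assumed to exist. SPM users would use SPM's own segmentation and deformation utilities instead.

```python
# Hedged sketch of the three steps above as FSL command lines (dry run only:
# the commands are assembled and printed, not executed). File names are
# hypothetical; a standard-to-anatomical warp file is assumed to exist.

steps = [
    # 1. Segment the anatomical (FAST writes partial-volume maps, e.g. *_pve_2).
    ["fast", "-o", "anat_seg", "anat_brain.nii.gz"],
    # 2. Warp a standard-space white-matter mask into anatomical space.
    ["applywarp", "--in=wm_mask_MNI.nii.gz", "--ref=anat_brain.nii.gz",
     "--warp=standard2anat_warp.nii.gz", "--out=wm_mask_anat.nii.gz",
     "--interp=nn"],
    # 3. Threshold/binarise the WM segmentation and mask it with the warped mask.
    ["fslmaths", "anat_seg_pve_2", "-thr", "0.5", "-bin",
     "-mas", "wm_mask_anat.nii.gz", "wm_final_mask"],
]
for cmd in steps:
    print(" ".join(cmd))
```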

Best,
Jeff
Jan 19, 2017  08:01 PM | Jeff Peters
RE: preprocessing
Thank you for your help, Dr. Spielberg! This is very helpful!

Best,

JP
Jan 23, 2017  05:01 PM | hnh
RE: preprocessing
Hi Dr. Spielberg,
thank you very much for your help, your time and your script! I really appreciate it.
In that case, I will follow your advice and use the FSL processing routines instead, so I can use the segmentation script after completing the registration in FSL. Thank you.

Unfortunately, I have another – hopefully last – question: Our data are in RAS orientation (right = right), but in your tutorial, you state that the data must be in LAS orientation. Do I need to reverse the data?

Best, Hannah
Jan 26, 2017  01:01 PM | Jeffrey Spielberg
RE: preprocessing
Hi Hannah,

No, the data does not need to be in LAS.  It does need to be consistent across all scans.  FSL recommends having everything in LAS, which is why that's in the toolbox documentation, but it's not a necessity (RAS is fine).  

This is not an issue for you, but it could be problematic if the order of the dimensions differed (e.g., ARS), as the scripts are written with the assumption that the data is stored in xyz.  

Best,
Jeff
Jan 30, 2017  01:01 PM | hnh
RE: preprocessing
Thank you very much, Dr. Spielberg! I really appreciate your help.

Best, Hannah