help > Assistance in Choosing the Right Model for *Whole-Brain* PPI Analysis in fMRI
Apr 2, 2023 06:04 AM | noamagal
Dear Prof. McLaren,
I am in need of some guidance on choosing the right model for PPI analysis in my fMRI project. The aim of this project is to test changes in *whole-brain* functional connectivity patterns and topological properties across three different groups (groups A, B, and C) and over three time points.
The task was the Hariri-Hammer Emotional Faces Matching Task. The task comprises 5 blocks of shapes (each 36 seconds long) and 4 blocks of faces (each 48 seconds long) in each session (time point). The blocks alternate, with a shapes block always preceding each faces block. Each faces block consists of 6 trials, and each block presents a different emotional expression (angry, fearful, neutral, or surprised); there is only one block per expression in each session. The order of the faces blocks differs between participants and time points (there are 4 versions of the order).
So far, we have calculated two types of connectivity matrices per time point – one for shapes and one for faces – because we have only one block for each emotional expression. An example of the P structure for the faces condition (for one seed and one time point) is attached. The contrast was "FacesMinusNone", and the beta weights of the interaction terms were collected from each seed-target pair to create the connectivity matrix. We have done the same for the shapes condition (with the contrast "ShapesMinusNone").
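For clarity, the matrix-assembly step we use can be sketched as follows. This is a minimal, toolbox-agnostic illustration; `ppi_interaction_beta` is a hypothetical placeholder for reading the interaction-term beta from the seed's first-level model at the target region, not a real gPPI function.

```python
import numpy as np

# Minimal sketch (hypothetical, not toolbox-specific): collect the PPI
# interaction beta for each seed-target pair into a connectivity matrix.
n_rois = 6  # illustrative number of atlas regions
conn = np.zeros((n_rois, n_rois))

def ppi_interaction_beta(seed, target):
    """Placeholder for reading the 'FacesMinusNone' interaction beta
    from the seed's first-level model at the target region."""
    return 0.1 * (seed - target)  # dummy value for illustration only

for s in range(n_rois):
    for t in range(n_rois):
        if s != t:  # seed-to-itself entries are not meaningful
            conn[s, t] = ppi_interaction_beta(s, t)

print(conn.shape)  # one such matrix per condition per time point
```

Note that the resulting matrix need not be symmetric: the beta from seed s at target t generally differs from the beta from seed t at target s.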
Now, we would like to explore the possibility of investigating differences in functional connectivity for each facial expression at each time point. We speculate that some emotions will show greater changes in functional connectivity than others. To achieve this, we wish to generate 3 (time points) × 4 (facial expressions) connectivity matrices.
Here are my questions:
What is the best way to do this? Should we build a separate PPI model per emotion (using the "emotion(i)MinusNone" contrast, e.g., "surpriseMinusNone"), or should we include all the PPI terms in the same model? We are concerned about the power of such a model, since we expect the regressors to be similar.
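One way to put numbers on the similarity concern is to correlate the PPI regressor columns of the proposed combined design matrix before fitting anything. The sketch below uses a random placeholder for X; in practice you would load the design matrix your software builds (e.g., the SPM.xX.X field in SPM) and select the four emotion-specific PPI interaction columns.

```python
import numpy as np

# Hedged sketch: quantify collinearity among the four emotion-specific
# PPI regressors via pairwise correlations. X is a stand-in here; in a
# real check, substitute the PPI columns of your first-level design.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))  # placeholder: scans x PPI regressors

corr = np.corrcoef(X, rowvar=False)       # 4 x 4 correlation matrix
off_diag = corr[~np.eye(4, dtype=bool)]   # drop the trivial diagonal
print("max |r| between PPI regressors:", np.abs(off_diag).max())
```

High off-diagonal correlations would support fitting separate models; low ones would suggest a single combined model is estimable.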
Are there any obvious pitfalls we should be aware of in conducting such an analysis, given our design?
I would be grateful for any feedback or advice you can provide. Thank you in advance for your time and expertise.
Best regards,
Noa