Hello Lydia,
I'm trying to run a jackknife sensitivity analysis by fitting multiple linear models, as per your suggestion above. I've set up one dummy variable per article: in each variable, every study is coded 1 except the excluded study, which is coded 0.
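To make the setup concrete, here is a rough pandas sketch of how I built these dummy columns (the study labels and column names are just placeholders of my own, not actual SDM variables):

```python
import pandas as pd

# Placeholder study labels; in my real table these are the included articles.
studies = ["study_1", "study_2", "study_3", "study_4", "study_5"]
table = pd.DataFrame(index=studies)

# One leave-one-out dummy per study: every study is coded 1
# except the study being excluded, which is coded 0.
for s in studies:
    table[f"{s}_excluded"] = (table.index != s).astype(int)

print(table)
```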
However, I have some questions regarding how to properly run and interpret the analysis:
1. Should I define a model and hypothesis, or use these variables as filters in the linear model?
- I tried running the analysis by selecting the variable (e.g., study_1_excluded) as a model predictor and setting the hypothesis to 1 (as per the GUI baseline recommendation). However, this resulted in an error: "ERROR: matrix is singular / Default GSL error handler invoked".
- If this error is due to the lack of variance in the dummy variable (since only one study differs), how should I run the leave-one-out analysis? (I've put a rough design-matrix check after this list to illustrate what I mean.)
- Or should I use the filter variable option in the GUI instead?
2. Could you please elaborate on how the linear model is calculated in this specific case?
- Since only one study has a different value, does the model still perform the regression at each voxel?
- How does SDM calculate the voxel-wise slope (β1) when one group consists of just one study? (A toy worked example of what I mean is below this list.)
3. Interpreting the Results:
- What does a significant β1 coefficient indicate in this context?
- If a voxel has a large β1, should I conclude that the omitted study strongly influenced that region?
- Would it be better to compare the intercept maps (across different LOO runs) instead of focusing on the slopes?
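To illustrate question 1, here is a small numpy check of the design matrix I think is being fitted (intercept plus one dummy). This is only my guess at the design, not SDM's actual internals: with a single leave-one-out dummy the matrix still has full column rank, but if all the dummies were put into one model together with the intercept, the matrix becomes rank deficient, which is the kind of situation I imagine could trigger the GSL error:

```python
import numpy as np

n_studies = 5

# Design with intercept plus ONE leave-one-out dummy (study 1 excluded).
dummy_1 = np.ones(n_studies)
dummy_1[0] = 0
X_single = np.column_stack([np.ones(n_studies), dummy_1])
# Rank equals the number of columns (2), so X'X is invertible.
print("single dummy rank:", np.linalg.matrix_rank(X_single))

# Design with intercept plus ALL leave-one-out dummies at once.
dummies = np.ones((n_studies, n_studies)) - np.eye(n_studies)
X_all = np.column_stack([np.ones(n_studies), dummies])
# Rank is 5 but there are 6 columns, so X'X is singular.
print("all dummies rank:", np.linalg.matrix_rank(X_all), "columns:", X_all.shape[1])
```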
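For questions 2 and 3, here is a toy unweighted OLS example of what I understand the voxel-wise fit to be with a 0/1 dummy: the intercept comes out as the fitted value for the single excluded study, and the slope as the difference between the mean of the remaining studies and that study. The numbers are made up, and I assume the real SDM fit also weights studies by their precision, so this is only meant to show the algebra I have in mind:

```python
import numpy as np

# Toy voxel-wise effect sizes for 5 studies (made-up numbers).
y = np.array([0.10, 0.42, 0.38, 0.55, 0.47])

# Leave-one-out dummy: study 1 coded 0, all other studies coded 1.
d = np.array([0, 1, 1, 1, 1], dtype=float)
X = np.column_stack([np.ones_like(d), d])

# Ordinary least squares fit of y = beta0 + beta1 * d.
beta0, beta1 = np.linalg.lstsq(X, y, rcond=None)[0]

print("beta0 (intercept):", beta0)      # equals y of the excluded study
print("beta1 (slope)    :", beta1)      # mean of the other studies minus beta0
print("check            :", y[1:].mean() - y[0])  # same value as beta1
```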
I appreciate any guidance you can provide on best practices for running LOO sensitivity analysis using meta-regression in SDM.
I used the SDM Linux version v6.21.
Thank you so much!