Jul 9, 2015 01:07 AM | Alfonso Nieto-Castanon - Boston University
RE: regression as anova
Dear Liron,
The short answer is to use the following model: select 'Group1', 'Group2', 'Group3', 'Emotion', 'Age', 'Gender', 'Meds1', and 'Meds2', and then use:
a) contrast [1 -1 0 0 0 0 0 0; 0 1 -1 0 0 0 0 0] to test for Group effects while controlling for all other effects (Emotion & covariates)
b) contrast [0 0 0 1 0 0 0 0] to test for Emotion effects while controlling for all other effects (Group & covariates)
The latter contrast will report T-statistics. If you wish to report an F-statistic instead, you can: 1) simply use the formula F = T^2 to get the associated F-values (the p-value of the F-stat will be equal to the p-value of the two-sided T-stat, and the associated degrees of freedom will be [1,dof], where dof is the degrees of freedom reported for the T-stat); or 2) use the [0 0 0 1 0 0 0 0; 0 0 0 -1 0 0 0 0] contrast you mentioned to get the F-stats directly. These two methods (1 & 2) are exactly equivalent (you will get exactly the same F/p/dof values).
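The T^2-to-F equivalence in method (1) is easy to verify numerically. Here is a quick check with scipy (an illustration only, not part of the original post; the T-value and dof are arbitrary):

```python
from scipy import stats

t, dof = 2.5, 40                      # arbitrary T-stat and degrees of freedom

p_t = 2 * stats.t.sf(abs(t), dof)     # two-sided p-value of the T-stat
F = t ** 2                            # F = T^2, with [1, dof] degrees of freedom
p_F = stats.f.sf(F, 1, dof)           # p-value of the corresponding F-stat

# The two p-values agree to machine precision
assert abs(p_t - p_F) < 1e-12
print(f"T = {t}, F = {F}, p(two-sided T) = {p_t:.6f}, p(F) = {p_F:.6f}")
```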
Note, nevertheless, that you should not simply compare the F-stats resulting from (a) and (b) to infer whether group or emotion has more predictive power for brain connectivity.

First, those F-stats have different degrees of freedom, and consequently different null-hypothesis distributions: test (a) results in an F(2,dof) distribution, while test (b) results in an F(1,dof) distribution. Note that even if you use the [0 0 0 1 0 0 0 0; 0 0 0 -1 0 0 0 0] contrast in (b), which has two rows like the (a) case, the stats will still be F(1,dof) rather than F(2,dof), because those two rows are not linearly independent.

Second, those F-stats do not really reflect the predictive power of the group variables (test a) and the emotion variable (test b), but rather the additional predictive power gained by adding the group variables to a model that uses emotion and control variables alone (test a), and the additional predictive power gained by adding the emotion variable to a model that uses group and control variables alone (test b).

And third, even if the F-stats had the same dof's, and even if they reflected comparable predictive-power measures, you would still have to demonstrate that the difference between those F-values is above chance levels before inferring that one model is "more predictive" than the other.

If you really need to answer that question (whether emotion or group is more predictive of functional connectivity), I would suggest instead using a bootstrap + cross-validation method. Briefly: use the bootstrap to resample your subjects (with replacement); then use leave-one-out cross-validation to test the predictive power of the two models that you want to compare (e.g. one that includes group & covariates, and another that includes emotion & covariates); and finally repeat the resampling step enough times to build a distribution of the predictive-power difference between the two models.
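The bootstrap + leave-one-out procedure just described can be sketched roughly as follows. This is a minimal illustration on simulated stand-in data (the variable names, the toy outcome, and the use of mean squared prediction error as the "predictive power" measure are all assumptions for illustration; this is not CONN code):

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical stand-in data for the design described in the thread ---
n = 60
group = rng.integers(0, 3, n)                  # 3 groups
emotion = rng.normal(size=n)                   # covariate of interest
covs = rng.normal(size=(n, 4))                 # age, gender, meds1, meds2
y = 0.5 * emotion + rng.normal(size=n)         # toy "connectivity" outcome

G = np.eye(3)[group]                           # one-hot group columns (span the intercept)
X_group = np.column_stack([G, covs])           # model A: group + covariates
X_emot = np.column_stack([np.ones(n), emotion, covs])  # model B: emotion + covariates

def loo_mse(X, y):
    """Leave-one-out cross-validated mean squared prediction error."""
    errs = []
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        errs.append((y[i] - X[i] @ beta) ** 2)
    return np.mean(errs)

# Bootstrap the difference in LOO error between the two models
n_boot = 200
diffs = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, n, n)                # resample subjects with replacement
    diffs[b] = loo_mse(X_group[idx], y[idx]) - loo_mse(X_emot[idx], y[idx])

# 95% bootstrap interval for the difference in predictive error;
# an interval excluding 0 suggests one model predicts reliably better
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"LOO-MSE(group) - LOO-MSE(emotion): 95% CI [{lo:.3f}, {hi:.3f}]")
```

Note that in a real analysis you would run this per ROI pair and on your actual subject measures; the sketch only shows the resampling-and-compare logic.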
Let me know if you would like me to further clarify how one would go about doing this.
Hope this helps
Alfonso
ps. I realize I may not have really answered your original question, so here is the actual answer: you are right, to test the regression portion of an ANCOVA model you just need to enter a contrast [0 0 0 1 0 0 0 0] (a one for the covariate-of-interest effect and 0's everywhere else). And yes, using a [0 0 0 1 0 0 0 0; 0 0 0 -1 0 0 0 0] contrast is exactly equivalent to testing the two-sided effect with the original [0 0 0 1 0 0 0 0] contrast (you get F-stats instead of T-stats, but these are exactly equivalent, since an F(1,dof) distribution is just the same as a T(dof)^2 distribution). Hope this helps!
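The point that a two-row contrast [c; -c] still yields F(1,dof), because its second row is linearly dependent on the first, can be checked with a small hand-rolled GLM F-test on simulated data (a sketch of the standard GLM contrast formula, not CONN's internal implementation; the 8-column design mirrors the model in this thread):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy GLM: 8 predictors as in the thread (3 group columns, emotion, 4 covariates)
n, p = 50, 8
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = n - p
sigma2 = resid @ resid / dof
XtX_inv = np.linalg.inv(X.T @ X)

def contrast_F(C):
    """F-stat for H0: C @ beta = 0, using rank(C) as the numerator dof."""
    C = np.atleast_2d(C)
    r = np.linalg.matrix_rank(C)               # effective number of constraints
    Cb = C @ beta
    M = C @ XtX_inv @ C.T                      # singular when rows are dependent
    F = Cb @ np.linalg.pinv(M) @ Cb / (r * sigma2)
    return F, r

c = np.array([0, 0, 0, 1, 0, 0, 0, 0], float)
F1, r1 = contrast_F(c)                         # one-row contrast
F2, r2 = contrast_F(np.vstack([c, -c]))        # two dependent rows

assert r1 == r2 == 1                           # the -c row adds no new constraint
assert np.isclose(F1, F2)                      # identical F(1, dof) statistics
print(f"F(one-row) = {F1:.4f}, F(two-row) = {F2:.4f}, numerator dof = {r2}")
```

The pseudo-inverse and rank handle the singular middle matrix, which is exactly why the duplicated row leaves both the F-value and the numerator degrees of freedom unchanged.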
Originally posted by L R:
Dear Alfonso
I apologize in advance; the background is long, and I hope you manage to follow what I am trying to convey...
Question: how can I model the regression that was specified by the contrast [1 0 0 0 0 0 0] using an ANCOVA model?
Does [1 0 0 0 0 0 0; -1 0 0 0 0 0 0] actually do a regression? (It gives F-values and not T-values.)
Background: my model consists of 3 groups (group1, group2, group3), one covariate of interest (emotion), and 4 covariates of no-interest (age, gender, meds1, meds2). I wanted to check which was more predictive of brain connectivity: group or emotion (after controlling for the 4 covariates of no-interest).
For that purpose, I originally performed two analyses to identify the ROI pairs that were significant for each model: one was an ANCOVA comparing the 3 groups while controlling for emotion + the 4 covariates of no-interest [1 0 -1 0 0 0 0 0; -1 1 0 0 0 0 0 0]. The second was a regression, controlling for group + the 4 covariates of no-interest [1 0 0 0 0 0]. I replicated my results in SPSS, and thus far everything was great.
I then learned that I should use only one of the methods (i.e. regression or ANOVA including all of my variables), because they are statistically the same (and indeed I got comparable results using the two aforementioned contrasts). So far so good. However, I still need to be able to identify which pairs are significant for emotion and which for group. I tried to go back and do an ANCOVA instead of the regression (which should give the same results). I was able to get the same results as the regression using [1 0 0 0 0 0 0; -1 0 0 0 0 0 0], but I'm not sure that actually constitutes an ANCOVA (?).
The only other way I can think of is splitting the emotion covariate into 3 covariates (i.e. emotion-group1, emotion-group2, emotion-group3, using zeros for subjects not included in that covariate) and using [1 0 -1; -1 1 0] to contrast them. The whole contrast would then use 0 0 0 to control for the 3 groups (group1, group2, group3, coded as before) and 0 0 0 0 to control for the 4 covariates of no-interest. The entire contrast would read [0 0 0 0 0 0 0 1 0 -1; 0 0 0 0 0 0 0 -1 1 0]. However, that didn't give me the expected results. I also tried to control for group using a single covariate (coded with 1, 2, 3), but that didn't solve it either. Any suggestion you may have as to how to solve this would be greatly appreciated!!
Sorry again for the long question, I hope it made sense...
Thank you very much!!
Liron.