adhd200preproc > State of the Art
Aug 11, 2011 05:08 PM | Mike Scarpati
State of the Art
Hi All,
I don't have any prior experience working with neuroscience data, so I was wondering if anyone was willing to share anything about their current results building the classifier. I've tried a number of different methods (so far working on the CC200 time courses from the Athena pipeline), and very few have performed significantly better than chance under 10-fold cross-validation. I've looked in the literature for the accuracy that might be expected on a task like this, but most of the studies I've seen used data from a single site. For anyone who has had success, was excluding outliers a significant part of it? I don't expect anyone to divulge too many details since this is a competition--just hoping to get an idea of the performance required to be competitive.
Mike
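For readers unfamiliar with the evaluation Mike describes, here is a minimal sketch of 10-fold cross-validation against a chance-level baseline. Everything here is illustrative: the feature matrix stands in for features one might derive from the CC200 time courses, the labels are synthetic, and the majority-class "classifier" exists only to show what chance performance looks like in this loop.

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Shuffle indices and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_val_accuracy(X, y, fit, predict, k=10):
    """Generic k-fold cross-validation: train on k-1 folds, test on the rest."""
    folds = k_fold_indices(len(y), k)
    accs = []
    for test in folds:
        test_set = set(test)
        train = [i for i in range(len(y)) if i not in test_set]
        model = fit([X[i] for i in train], [y[i] for i in train])
        correct = sum(predict(model, X[i]) == y[i] for i in test)
        accs.append(correct / len(test))
    return sum(accs) / len(accs)

# Chance-level baseline: always predict the majority class of the training fold.
def fit_majority(X_train, y_train):
    return max(set(y_train), key=y_train.count)

def predict_majority(model, x):
    return model

# Synthetic stand-in for subject features (e.g., flattened connectivity values)
# and binary diagnosis labels -- not actual competition data.
random.seed(1)
X = [[random.gauss(0, 1) for _ in range(20)] for _ in range(200)]
y = [random.randint(0, 1) for _ in range(200)]
print(f"10-fold CV accuracy: {cross_val_accuracy(X, y, fit_majority, predict_majority):.3f}")
```

With random labels this hovers near 0.5, which is the "random performance" floor any real feature set would need to clear. Swapping `fit_majority`/`predict_majority` for an actual classifier keeps the same evaluation loop.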
Threaded View
| Author | Date |
|---|---|
| Mike Scarpati | Aug 11, 2011 |
| Pierre Bellec | Aug 25, 2011 |
| Che-Wei Chang | Aug 25, 2011 |