Aug 25, 2014 09:08 AM | Nicolas Vinuesa
Training
Dear all,
We have been using W2MHS with acceptable results, but in order to achieve our goal we need more accuracy. As has been discussed, this could be due to a lack of examples corresponding to our images in the trained RF model.
Therefore, we are extremely interested in re-training the RF model (or training it from scratch, if re-training is not possible) using our own hand-segmented images.
It would be a great help if you could explain how you did the training (if you still have the MATLAB code you used for it) and how the labeling (WMH vs. non-WMH) of the image is done.
Thanks a lot for your time.
Aug 27, 2014 04:08 PM | Vikas Singh
RE: Training
Hi Nicolas,
Can you comment on how many hand-segmented images you've obtained so far? Are you able to run the semi-supervised segmentation to get hand-segmented images?
--Vikas
Aug 29, 2014 10:08 AM | Nicolas Vinuesa
RE: Training
Hi Vikas,
At the moment we only have 2 segmented images, and we are doing them by hand using ITK-SNAP. If you can advise us on a semi-supervised method (with available code), it would help us a lot.
The problem I have is that I don't know how to structure the "features_training.mat" matrix (the matrix containing the feature vector for each voxel in the training database). From what I see, you trained the model using a 119284x2000 matrix, i.e., 119284 labelled voxels, each with a feature vector of length 2000. Please tell me if I'm wrong.
If I am correct, how are these 119284 voxels organized? Is it just a concatenation of the voxels of different subjects?
Thanks a lot for your time.
Aug 29, 2014 03:08 PM | Vamsi Ithapu
RE: Training
Hello.
The features_gen.m script is used to generate the features_training.mat file.
For each nii image on which training is to be done, we first selected the hyperintense voxels (more on that below). A binary image is then generated wherein these hyperintense voxels are +1 and all the rest are 0.
In our case these are the images indexed in lines 4 and 10-16 of the script. The binary images correspond to the '.._WMH.nii' files in the script. For each such training nii image, and for each hyperintense voxel in that image, we compute the feature vector (of 2000 dimensions, which is basically the intensity and texture profile in the neighbourhood of the corresponding voxel; see line 20 of the script). The label of this vector is +1.
Similarly, non-hyperintense voxels are selected randomly from the same images and their feature vectors are constructed and labelled -1. This is done for all hyperintense (and some non-hyperintense) voxels across all images, and all such feature vectors are simply concatenated and saved as features_training.mat.
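To make the concatenation concrete, here is a minimal MATLAB sketch of that assembly step. It is not the actual features_gen.m: extract_patch_features is a hypothetical stand-in for the per-voxel neighbourhood feature code, the file names are made up, and niftiread stands in for whatever NIfTI reader the toolkit uses (e.g. load_nii).

% Sketch: assemble the labelled training matrix as described above.
% Assumes each subject has a FLAIR volume 'subjN.nii' and a binary mask
% 'subjN_WMH.nii' (1 = hyperintense, 0 = rest).
subjects = {'subj1', 'subj2'};              % training image identifiers
features = [];                              % grows to (num_voxels x 2000)
labels   = [];                              % +1 for WMH, -1 for non-WMH

for s = 1:numel(subjects)
    vol  = niftiread([subjects{s} '.nii']);        % FLAIR intensities
    mask = niftiread([subjects{s} '_WMH.nii']);    % binary WMH labels

    pos = find(mask == 1);                         % all hyperintense voxels
    neg = find(mask == 0);                         % candidate negatives
    neg = neg(randperm(numel(neg), numel(pos)));   % random non-WMH sample

    idx = [pos; neg];
    for k = 1:numel(idx)
        f = extract_patch_features(vol, idx(k));   % hypothetical: 1 x 2000 vector
        features = [features; f];                  %#ok<AGROW>
        labels   = [labels; 2*double(mask(idx(k)) == 1) - 1];
    end
end

save('features_training.mat', 'features', 'labels');

Concatenating across subjects gives one row per sampled voxel, which matches the 119284x2000 shape Nicolas observed.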
The semi-supervised segmentation is done by going through the training images one by one and picking seeds for the hyperintense voxels. Once this is done, a simple intensity-based random walk is run from all the seeds to generate contiguous hyperintense regions around them.
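A rough sketch of that seeding step, for orientation only: grow_region below is a placeholder for the intensity-based random walk (the toolkit's regiongrowing.m, mentioned later in this thread, may serve; check its actual signature), and the seed coordinates are made up.

% Sketch: turn manually picked seed coordinates into a binary WMH mask.
vol   = niftiread('subj1.nii');
seeds = [62 85 24; 70 91 26];       % hypothetical (x, y, z) seed voxels

mask = false(size(vol));
for i = 1:size(seeds, 1)
    grown = grow_region(vol, seeds(i, :), 0.1);   % placeholder call; the
    mask  = mask | grown;                         % tolerance 0.1 is a guess
end

niftiwrite(uint8(mask), 'subj1_WMH.nii');         % the '.._WMH.nii' label image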
Hope these comments are of help.
Sep 3, 2014 03:09 PM | Vikas Singh
RE: Training
Nicolas,
We can provide you with the random-walk-based semi-supervised segmentation code used to generate the training images. Is that what you're asking for?
--Vikas.
Sep 4, 2014 12:09 PM | Nicolas Vinuesa
RE: Training
Dear Vikas and Vamsi,
First of all, thanks a lot for your quick and helpful answers.
I will start by trying to obtain the feature vectors using features_gen.m; for the segmentation (which we were doing by hand), I will try to apply the random walk algorithm (which I guess is the one provided in the toolkit: regiongrowing.m).
Just one last question: did you find the seeds in a graphical visualization tool and then just pass the coordinates to the regiongrowing function, or did you use another method?
Again, thanks a lot for your time.
Sep 10, 2014 04:09 PM | Vamsi Ithapu
RE: Training
Hello.
Sorry about the late reply.
Yes, the seeds were picked in a visualization tool and then passed to the simple random walk code. The region growing function can be used for the same purpose.
I should mention, though, that we created the training data a long time ago, and a different random walk code was used then.
Once this semi-supervision is done, generation of the feature matrix entirely follows the code in features_gen.m (please get back to us if the script is rusty and needs more explanation of what is going on).
Thanks again for your interest. Hope these suggestions improve your results.
Apr 5, 2015 12:04 AM | Sasha Rivas
RE: Training
Hi,
May I know the contrast of the training images from which the training feature vectors were created?
Thanks
Apr 13, 2015 06:04 PM | Christopher Lindner
RE: Training
We are trying to get you the contrast information, but we don't know exactly what you are looking for. The images used to train the model have not been processed beyond what is detailed in our paper.
Chris
Apr 14, 2015 12:04 AM | Sasha Rivas
RE: Training
Originally posted by Christopher Lindner:
"We are trying to get you the contrast information, but we don't know exactly what you are looking for. The images used to train the model have not been processed beyond what is detailed in our paper."

Hi Chris,
Thanks for your reply.
In the context of my work, I need to change some of the training parameters. However, classification then doesn't work as expected on my test samples. I can see that the range of intensity values in the training patches is between 0 and ~2500, while it is between 0 and ~0.75 in my test samples. I therefore suspect the problem comes from my test data points being far from the training data points, with the result that all my test samples are classified as -1 or +1 (please note that I'm still trying to build a regression-based RF). I then tried changing the contrast of my test samples and got much more reasonable results. However, the problem now is that I don't know to what scale I should change the contrast of my test samples.
It would be much appreciated if you could provide some details of the training images' contrast. It might also help if I could download the training images from a repository.
Thanks in advance for your help.
Sasha
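A quick way to verify a mismatch like this, for anyone hitting the same issue (the variable name features inside features_training.mat and the file my_subject.nii are assumptions; list the .mat contents first):

% Sketch: compare the intensity range of the shipped training features
% against a local test volume.
whos('-file', 'features_training.mat')       % check the real variable names

S     = load('features_training.mat');
train = S.features;                          % assumed name of the 119284 x 2000 matrix
test  = double(niftiread('my_subject.nii')); % hypothetical local FLAIR volume

fprintf('training range: [%g, %g]\n', min(train(:)), max(train(:)));
fprintf('test range:     [%g, %g]\n', min(test(:)),  max(test(:)));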
Apr 21, 2015 10:04 PM | Christopher Lindner
RE: Training
"For our FLAIR images, there is not a restricted range of values.
On any particular image, the values we care about (i.e., the values
for voxels representing actual brain) are usually between 0-1500,
while the range for the entire image may be all over the board.
However, it is a distinct possibility that other scan sites,
scanner types, etc. may produce a different range of values."
Normalization is also done automatically by the preprocessing module of W2MHS. Currently you cannot train your own RF regression model for use with W2MHS, but we are working on code that will allow others to use their own training data with our scheme.
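For anyone who still needs to bring a test image into the same regime by hand, a crude linear rescale along these lines is one starting point. The 1500 target comes from the range quoted above; this is not the toolkit's actual preprocessing, and prctile needs the Statistics Toolbox.

% Sketch: linearly rescale a FLAIR volume so brain voxels span roughly 0-1500.
vol   = double(niftiread('my_subject.nii')); % hypothetical file name
brain = vol(vol > 0);                        % crude brain/background split

lo = prctile(brain, 1);                      % robust lower bound
hi = prctile(brain, 99);                     % robust upper bound

scaled = (vol - lo) ./ (hi - lo) * 1500;     % map [lo, hi] onto ~[0, 1500]
scaled = max(scaled, 0);                     % clip negatives from the shift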