open-discussion > use the package of MRIcroGL in common python coding and problem about remove haze in scripting
Jun 26, 2020 03:06 PM | Lance Liu
use the package of MRIcroGL in common python coding and problem about remove haze in scripting
Hi,
I'm a new user of the MRIcroGL tool. It is awesome. However, I have some questions about using some of its functions from Python. Can I use the embedded scripting functions (the gl package, i.e. import gl) in my own Python code?
The functions of special interest to me are in the toolbar under View: "Remove Haze" and "Remove Haze and Smooth Edges". How can I use them in the tool's scripting? I have not found any information about that in the help documentation.
Thanks very much
Jun 26, 2020 03:06 PM | Chris Rorden
RE: use the package of MRIcroGL in common python coding and problem about remove haze in scripting
I am assuming you are using the latest version (1.2.20200331). The extract function will remove haze. A minimal script might look like this:
import gl
gl.loadimage('myBrain.nii')
gl.extract(1,1,3)
The help for this function describes the arguments:
extract (built-in function):
extract(|b,s,t) -> Remove haze from background image. Blur edges (b: 0=no, 1=yes, default), single object (s: 0=no, 1=yes, default), threshold (t: 1..5=high threshold, 5 is default, higher values yield larger objects)
You can see all the in-built functions and their usage by running the Scripting/Templates/Help menu item. The other scripts in the Scripting/Templates menu describe the functions in more detail. The NITRC wiki also provides a brief introduction to scripting. Feel free to contribute to the Wiki if you want to improve it for others.
Jun 28, 2020 09:06 AM | Lance Liu
RE: use the package of MRIcroGL in common python coding and problem about remove haze in scripting
Hi Chris
Thanks very much for the reply. I also want to know how I can use the gl package in regular Python code, more specifically from the command line rather than the graphical interface (the View/Scripting menu item). Can I pip install the package from the Python Package Index, or is it only available as the embedded package inside the MRIcroGL tool? Thanks
Jun 28, 2020 11:06 AM | Chris Rorden
RE: use the package of MRIcroGL in common python coding and problem about remove haze in scripting
MRIcroGL is a natively compiled executable using the native widgetset of your operating system (Windows API for Windows, Qt or GTK2 for Linux, and Cocoa for macOS), with hardware-accelerated graphics and compute using OpenGL (or optionally Metal for macOS). Unlike FSLeyes, it is not written in Python; rather, Python commands are interpreted (using PyArg_ParseTuple) as native commands. The benefit is that it is very fast; the disadvantage is that you cannot run it directly from the Python command line. In this respect, it is like Blender: you cannot import it into an ordinary Python session, but you can launch it from the system command line and have it load a Python script. If you want to handle NIfTI images from the Python command line, I would think FSLeyes would be a good choice. On the other hand, MRIcroGL is open source, so if you want to extend it, you could always add in hooks for IPC. If you want to develop this, create a fork of the GitHub repository, and when you are happy with your solution generate a pull request to share it with the community.
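To illustrate the launch-a-script workflow described above, here is a minimal sketch that writes a gl script to a file and builds the command to hand it to MRIcroGL. The executable name, its presence on the PATH, and the input filename 'myBrain.nii' are assumptions; check your own install.

```python
import shutil
import tempfile
import os

# Compose the script MRIcroGL should run. Note that this "gl" code is
# interpreted by MRIcroGL's embedded engine, not by your system Python.
script_text = "\n".join([
    "import gl",
    "gl.loadimage('myBrain.nii')",  # hypothetical input file
    "gl.extract(1, 1, 3)",
])

# Write the script to a temporary file.
fd, script_path = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as f:
    f.write(script_text)

# Build the launch command; the executable name and whether it is on the
# PATH depend on your platform and install method.
exe = shutil.which("MRIcroGL") or "MRIcroGL"
cmd = [exe, script_path]
# To actually launch (this blocks until MRIcroGL exits):
# import subprocess; subprocess.run(cmd, check=True)
```

From bash the equivalent is simply `MRIcroGL myscript.py`, assuming the executable is on your PATH.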
Jun 28, 2020 12:06 PM | Chris Rorden
RE: use the package of MRIcroGL in common python coding and problem about remove haze in scripting
By the way, you can always use the Scripting Editor in MRIcroGL as a kind of command line, entering one or more commands at a time. Consider pasting this script into the scripting editor and pressing Control-R to run it:
import gl
gl.resetdefaults()
gl.loadimage('spm152')
You can now add one or more commands to change the image (new instructions are applied to the existing state machine). So you could run:
gl.resetdefaults()
gl.loadimage('spm152')
gl.minmax(0, 24, 84)
and it will change the image intensity without any other effects. You can add effects one at a time. The one special command is gl.resetdefaults(), which sets many settings back to their defaults; you only want to use it at the very start of a script, to ensure that all scripts begin from a common state.
Aug 2, 2020 10:08 AM | Lance Liu
RE: use the package of MRIcroGL in common python coding and problem about remove haze in scripting
Thanks so much for your reply Chris,
I have tried this and it is very useful. Therefore I want to use some of the tools inside MRIcroGL. "Remove Haze and Smooth Edges" is of special interest to me, and I want to incorporate that function into my current experiment pipeline, which is coded in Python (bash is also fine). It will help me remove the bracing structures in head CT scans (the head fixator of the CT machine appears in the scans).
Could you tell me how you implemented it? Did you just use an image-processing tool to keep the largest connected region in the image?
Also, it would be great if FSLeyes had that function, so I could use that tool directly to remove those unnecessary parts.
Thanks so much
Aug 2, 2020 11:08 AM | Chris Rorden
RE: use the package of MRIcroGL in common python coding and problem about remove haze in scripting
Consider the command "Remove Haze with Options"
1. Segment the image with a multi-level Otsu's Method
https://en.wikipedia.org/wiki/Otsu%27s_m...
https://scikit-image.org/docs/dev/auto_e...
The user selects a threshold from 1..5
a. Level 1: Compute Otsu 4 level, choose darkest as "air"
b. Level 2: Compute Otsu 3 level, choose darkest as "air"
c. Level 3: Compute Otsu 2 level, choose darkest as "air"
d. Level 4: Compute Otsu 3 level, choose all but brightest as "air"
e. Level 5: Compute Otsu 4 level, choose all but brightest as "air"
Air voxels are set to darkest value in volume.
2. (Optional) If the user selects "Only largest object", we compute connectivity based on 6 neighbors (i.e. to be part of a cluster, a neighbor must share a face; sharing an edge or corner does not count). Voxels that are not part of the largest cluster are set to the darkest value in the volume.
3. (Optional) If the user chooses "Smooth Edges", a mask is generated that contains surviving voxels that are near air (as defined by 3 dilations). Voxels in this region are blurred with their neighbors. The goal is to ensure that voxels near the air-brain surface do not appear jagged, while interior voxels retain their detail.
4. The surviving clusters are dilated by two voxels, and the "air" voxels near the object get their original values back. The goal is to feather the edges and preserve partial-volume effects.
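The four steps above can be sketched in Python. This is a hypothetical re-implementation using NumPy and SciPy, not the actual MRIcroGL source; for simplicity the segmentation step uses the two-class case (i.e. threshold level 3), and the blur sigma is an arbitrary choice.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(vol, nbins=256):
    """Classical two-class Otsu: pick the cut maximizing between-class variance."""
    hist, edges = np.histogram(vol, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                  # class-0 weight at each candidate cut
    m = np.cumsum(p * centers)         # cumulative intensity mass
    mT = m[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    var_between = np.zeros(nbins)
    var_between[valid] = (mT * w0[valid] - m[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(var_between)]

def remove_haze(vol, smooth_edges=True, single_object=True):
    """Sketch of the 4-step recipe (assuming threshold level 3 = two-class Otsu)."""
    dark = vol.min()
    out = vol.astype(float).copy()
    # 1. Voxels below the Otsu threshold are "air"; set them to the darkest value.
    air = vol < otsu_threshold(vol)
    out[air] = dark
    obj = ~air
    # 2. Keep only the largest 6-connected cluster (face neighbors only;
    #    ndimage.label's default structure is 6-connectivity in 3-D).
    if single_object:
        labels, n = ndimage.label(obj)
        if n > 1:
            sizes = ndimage.sum(obj, labels, range(1, n + 1))
            obj = labels == (1 + np.argmax(sizes))
            out[~obj] = dark
    # 3. Blur surviving voxels near air (within 3 dilations of the air mask).
    if smooth_edges:
        rim = obj & ndimage.binary_dilation(~obj, iterations=3)
        blurred = ndimage.gaussian_filter(out, sigma=1.0)
        out[rim] = blurred[rim]
    # 4. Feather: dilate the object by two voxels and restore the original
    #    values of the nearby "air" voxels, preserving partial-volume effects.
    feather = ndimage.binary_dilation(obj, iterations=2) & ~obj
    out[feather] = vol[feather]
    return out
```

On a synthetic volume with a bright cube plus a small disconnected blob in low-level haze, this keeps the cube interior intact, deletes the blob, and feathers the cube's border.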
The MRIcroGL scripting function "extract" can perform this.
extract (built-in function):
extract(|b,s,t) -> Remove haze from background image. Blur edges (b: 0=no, 1=yes, default), single object (s: 0=no, 1=yes, default), threshold (t: 1..5=high threshold, 5 is default, higher values yield larger objects)
And it is easy to call MRIcroGL scripts from Python or bash scripts. However, the current scripting language only has commands to save bitmaps, not modified NIfTI images (e.g. you cannot invoke the File/SaveNIfTI menu item from a script). I will consider adding this in a future release. In the meantime, you can emulate the function in a Python script using the recipe I provide above.
Aug 3, 2020 03:08 PM | Lance Liu
RE: use the package of MRIcroGL in common python coding and problem about remove haze in scripting
Hi Chris
Thanks. So steps 1 to 4 are the pipeline with which you implement the "Remove Haze and Smooth Edges" function, right?
So "level" means the threshold setting for the image?
Following https://scikit-image.org/docs/dev/auto_examples/segmentation/plot_multiotsu.html, should I first choose one of the threshold levels 1..5 and then do the following computation?
Sorry, I did not get the a-e part. Do you mean Level 1 corresponds to (a)? So in (a) I compute the 4-level Otsu (4 classes, and set the darkest as air), in (b) the 3-level Otsu (3 classes, darkest as air), and so on? Could you explain in more detail?
I think calling MRIcroGL scripts from Python or bash scripts will also help me in some sense.
Aug 3, 2020 04:08 PM | Chris Rorden
RE: use the package of MRIcroGL in common python coding and problem about remove haze in scripting
I assume you are using a recent version of MRIcroGL, e.g. v1.2.20200707. The "View" menu has two options, "Remove Haze" and "Remove Haze with Options". The latter shows an options window where you set the threshold level, whether to smooth edges, and whether to extract only the largest object. The former runs the routine with the default options: threshold level 5, YES for smooth edges, and YES for extracting only the largest object.
In step 1, a-e refer to the 5 possible threshold levels (1..5). Note that Level 3 uses the classical Otsu method for segmenting the image into two classes (e.g. black and white, as described in the wiki page). The other options separate the image into more classes (the multi-Otsu described on the scikit-image page). For example, consider a T1-weighted scan where the histogram reveals roughly three classes of brightness: dark (hard bone, air, CSF), medium (gray matter, muscle, veins, soft bone), and bright (fat, white matter). For such an image, a two-class Otsu method may not reliably find a satisfactory threshold for dark versus other tissue, while a three-class segmentation would be a better fit.
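To make the level-to-class mapping concrete, here is a hypothetical sketch. The brute-force multi-Otsu below is only for illustration (in practice you would use scikit-image's threshold_multiotsu); the function names and bin count are my own choices, not MRIcroGL's.

```python
import itertools
import numpy as np

def multi_otsu(vals, classes, nbins=48):
    """Brute-force multi-Otsu: choose class boundaries on a coarse histogram
    that maximize between-class variance. Returns (classes - 1) thresholds."""
    hist, edges = np.histogram(vals, bins=nbins)
    p = hist.astype(float) / hist.sum()
    c = 0.5 * (edges[:-1] + edges[1:])
    P = np.concatenate([[0.0], np.cumsum(p)])      # cumulative weight
    M = np.concatenate([[0.0], np.cumsum(p * c)])  # cumulative intensity mass
    best, best_cuts = -1.0, None
    for cuts in itertools.combinations(range(1, nbins), classes - 1):
        bounds = (0,) + cuts + (nbins,)
        # Maximizing sum of w_k * mu_k^2 is equivalent to maximizing
        # between-class variance, since the total mean is constant.
        var = 0.0
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            w = P[hi] - P[lo]
            if w > 0:
                var += (M[hi] - M[lo]) ** 2 / w
        if var > best:
            best, best_cuts = var, cuts
    return c[list(best_cuts)]

def air_mask(vol, level):
    """Map threshold levels 1..5 to multi-Otsu 'air' classes, per the recipe:
    1: 4 classes, darkest is air      4: 3 classes, all but brightest are air
    2: 3 classes, darkest is air      5: 4 classes, all but brightest are air
    3: 2 classes, darkest is air"""
    classes = {1: 4, 2: 3, 3: 2, 4: 3, 5: 4}[level]
    t = multi_otsu(vol.ravel(), classes)
    if level <= 3:
        return vol < t[0]    # only the darkest class is air
    return vol < t[-1]       # everything below the brightest class is air
```

Note how levels 2 and 4 share the same 3-class segmentation but label different classes as air, which is why higher levels yield larger surviving objects.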
While setting air to a uniform dark value is useful for visualization and leads to smaller compressed file sizes, you need to think carefully about whether this is a method you want to use earlier in an image-processing pipeline. Specifically, the algorithm may not distinguish between different classes of dark tissue (e.g. air, bone, and CSF in a T1 scan). Further, mixture-of-Gaussians segmentation methods may then assume an unrealistically narrow variance for dark tissues. Additionally, homogeneity (bias-field) correction methods may be deprived of useful variations in intensity. In general, I would note that most processing tools make numerous implicit assumptions about how an MRI scan appears. Therefore, unless you carefully inspect your pipeline, I would restrict the "remove haze" step to the final visualization stage.