Thus, I want to have VisuAlign output one of the following: 1) ABA labels at a higher level of granularity, or 2) intensity values assigned to specific brain regions such that the label intensity for each ROI is the same across brains. Is there any way to accomplish this by swapping out or editing the install files, or through some other route?
Thank you for your time!
However, it may be worth pointing out here that the two kinds of PNG sets are meant more for a quick view than for further processing. For the latter purpose the ".flat" files may still be a better choice, even with their quirks: https://www.nitrc.org/plugins/mwiki/index.php?title=visualign:Flat_file_format
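For example, a minimal reader for such a file might look like the sketch below. This is only a sketch, assuming the layout described on the linked page (one byte giving the number of bytes per value, two big-endian 32-bit integers for width and height, then width × height big-endian values in row order); the function name is just for illustration.

# Minimal sketch of reading a VisuAlign ".flat" file into a NumPy array,
# assuming: 1 byte = bytes per value (B), then big-endian int32 width and
# height, then width*height values of B bytes each, in row order.
import struct
import numpy as np

def read_flat(path):
    with open(path, "rb") as f:
        bytes_per_value = f.read(1)[0]
        width, height = struct.unpack(">ii", f.read(8))
        data = f.read(bytes_per_value * width * height)
    dtype = {1: ">u1", 2: ">u2", 4: ">u4"}[bytes_per_value]
    return np.frombuffer(data, dtype=dtype).reshape(height, width)

# Hypothetical usage with one of the VisuAlign exports mentioned below:
# labels = read_flat("4728114R_RGB_0000_nl.flat")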
That said, the request is doable too, and the data is already there: VisuAlign keeps its atlas data in its "cutlas" folders; ABA_Mouse_CCFv3_2017_25um.cutlas may be the one in use here. Inside the folder there is a NIfTI file with the actual segmentation and a text file with the label descriptors (there is a short summary of the structure at the beginning of the file). Technically it is an ITK label file, except that some labels have extremely high values, which ITK-SNAP usually refuses to load (but I haven't tried recently).
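Working directly from the cutlas folder could look roughly like the sketch below. This is only an illustration: the NIfTI file name is a placeholder, nibabel is just one way to read NIfTI, and the label file is parsed on the assumption that label lines start with four integers (id, R, G, B) followed by the region name.

# Minimal sketch with a placeholder NIfTI file name; the label-file parsing
# (id R G B name) is an assumption about the descriptor format.
import re
import numpy as np
import nibabel as nib

cutlas = "ABA_Mouse_CCFv3_2017_25um.cutlas"
seg = nib.load(f"{cutlas}/segmentation.nii.gz")   # placeholder file name
label_volume = np.asanyarray(seg.dataobj)          # 3D array of region identifiers

names = {}
with open(f"{cutlas}/labels.txt") as f:
    for line in f:
        m = re.match(r'\s*(\d+)\s+(\d+)\s+(\d+)\s+(\d+)\s+(.*)', line)
        if m:
            names[int(m[1])] = m[5].strip()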
I hope this helps, best regards,
Gergely Csucs
Hi Gergely,
Thank you for your reply. I looked into the .flat file link you provided and installed the pyflat package. However, I could not figure out how to use pyflat.py to output a greyscale file with ABA greyscale values so I could quantify fluorescence based on label intensity. I tried running the following line from the terminal within the directory containing the VisuAlign outputs:
python *\pyflat.py flat=4728114R_RGB_0000_nl.flat output=Grayscale.png
Even though the code ran without an error, no "Grayscale.png" output was produced.
Ultimately, I envision a NumPy script where I can use the VisuAligned atlas labels to mask each brain region and quantify fluorescence values, cell counts, etc. in the original fluorescence image. The impediment to this is obtaining consistent greyscale VisuAligned atlas labels of high anatomical granularity to serve as the mask. I think I can achieve this by manually modifying all RGB values in the labels.txt file such that the *_nl.png outputs have higher anatomical granularity and give appropriate and consistent grey values when converted to 16-bit images. However, this approach requires me to manually change each of the 500+ RGB values in the labels.txt file. Do you know of an easier way for me to accomplish this goal?
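For illustration only, the kind of per-region NumPy quantification envisioned here might look roughly like the sketch below, assuming a 2D label image in which every region already has a unique integer/grey value and a matching fluorescence image of the same size; the file names and the use of tifffile are placeholders.

# Minimal sketch of per-region quantification, assuming "labels" is a 2D
# array in which each brain region has a unique integer value and "fluo"
# is the matching fluorescence image; names and paths are placeholders.
import numpy as np
import tifffile

labels = tifffile.imread("atlas_labels_16bit.tif")   # hypothetical mask image
fluo = tifffile.imread("fluorescence.tif")            # matching section image

results = {}
for region_id in np.unique(labels):
    if region_id == 0:          # 0 assumed to mean background / clear label
        continue
    mask = labels == region_id
    results[int(region_id)] = {
        "pixels": int(mask.sum()),
        "mean_fluorescence": float(fluo[mask].mean()),
        "integrated_fluorescence": float(fluo[mask].sum()),
    }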
VisuAlign performs really well for registration, and I'm not aware of any other program that does so well (the current approach is DeepSlice -> QuickNII -> VisuAlign -> custom Fiji macros to quantify fluorescence). Any advice would be greatly appreciated.
Kind regards,
Austen
I figured out how to change the RGB values using copy/paste from Excel files I already had for each ABA label (attached), but I guess I didn't think it through entirely. Even though I now get ABA RGB values at a high level of anatomical granularity for the VisuAlign *nl.png outputs, the greyscale version doesn't match the ABA labels (which is my goal here). Any tips on getting the VisuAlign outputs to give me .png images with atlas labels instead of RGB values?
Thanks!
A simple approach is to throw away the eye-candy part and just make all colors unique (unique for a computer, not necessarily to the human eye). Then, depending on the capabilities and needs of whatever software comes next, either look up labels from the altered color table, or use a direct decoding approach.
The piece of code below expects three parameters: label=somelabels.txt is the input, for example the original label file taken from a "cutlas" folder of VisuAlign. valabel=someotherfile.txt is an output that preserves the identifiers but replaces the colors. The third parameter, decode=yetanotherfile.txt, provides a decoding aid, which may or may not be useful.
import sys, re

# Collect key=value command-line arguments (label=..., valabel=..., decode=...)
args = {}
for arg in sys.argv[1:]:
    pair = arg.split("=")
    args[pair[0]] = pair[1]

with open(args["label"]) as f:
    with open(args["valabel"], "w") as lva:
        with open(args["decode"], "w") as ldec:
            idx = 0
            for line in f:
                # Label lines start with four integers (id, R, G, B), followed by the name
                lbl = re.match(r'\s*(\d+)\s+(\d+)\s+(\d+)\s+(\d+)\s+(.*)', line)
                if not lbl:
                    # Non-label lines (e.g. the header) are copied unchanged
                    lva.write(line)
                    ldec.write(line)
                else:
                    # Keep the original identifier, but replace the color with an
                    # encoding of the running index: R = idx & 255, G = idx >> 8, B = 0
                    lva.write(f'{lbl[1]}\t{idx & 255}\t{idx >> 8}\t0\t{lbl[5]}\n')
                    # In the decode file the index itself serves as the identifier
                    ldec.write(f'{idx}\t{idx & 255}\t{idx >> 8}\t0\t{lbl[5]}\n')
                    idx += 1
I called it relabel.py, and an example run is
python.exe relabel.py label=labels.txt valabel=valabel.txt decode=declabel.txt
Here valabel.txt is the one preserving the identifiers but replacing the colors: practically it starts with R=G=B=0, then increases R from 0 to 255, then increases G to 1 and starts over from R=0. With this file you can overwrite labels.txt in the corresponding cutlas folder of VisuAlign (of course it's a good idea to keep a copy of the original), do an export, and then all colors are going to be unique, though not very pleasing to the eye.
Then whatever software is used next may already be able to look up labels from this modified label file.
The other output (declabel.txt) may come in handy if custom software is used. As the RGB color components now form a single, continuous index (well, B is 0), that index can be decoded directly: index = R + G*256. This label file contains such numbers, ranging from "clear label" with R=G=B=index=0, to "retina" with R=47, G=5 (B=0), id=1327, which happens to be 5*256+47.
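To illustrate the direct decoding route in custom software (only a sketch: the exported file name is a placeholder, and Pillow plus NumPy is just one way to read the PNG):

# Minimal sketch of the direct decoding described above: after exporting with
# the modified label file, every pixel encodes its label index as R + G*256
# (B stays 0). The file name is a placeholder.
import numpy as np
from PIL import Image

rgb = np.array(Image.open("section_nl.png").convert("RGB"))   # H x W x 3
index_map = rgb[..., 0].astype(np.uint16) + rgb[..., 1].astype(np.uint16) * 256

# index_map now holds the same numbers that appear in declabel.txt,
# e.g. 0 for "clear label" and 1327 (= 5*256 + 47) for "retina".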
I hope this helps, best regards,
Gergely
Hello Gergely,
I tried using modified Python code to read the flat file; however, the output is a low-res PNG. What would I need to do to get a high-res PNG similar to the final output images from NUTIL?
I am specifically interested in extracting the distinct cerebellar nuclei. If each of these nuclei had a different color, I could use Fiji to extract ROIs based on color information. Currently, however, the labeling in the final output of NUTIL yields the same color for all cerebellar nuclei.
Hi Aaron,
I hope this helps.
Best regards,
Gergely
Hi Gergely
Attached is the traceback. Also, I looked into your other solutions regarding upscaling the images, but for some reason the downscaled images and the hi-res segmentations have different sizes and length-to-width ratios. I'm not sure what is going on here; I attached an example of this.
I appreciate your help on this matter very much.
Best
Aaron