
Support native-space inputs #2

Open
tsalo opened this issue Oct 26, 2020 · 18 comments

@tsalo

tsalo commented Oct 26, 2020

Currently, ICA-AROMA only supports standard-space inputs (MNI152 with 2 mm isotropic voxels, I believe). To maximize compatibility with other ICA classification methods (e.g., tedana), we need to support native-space inputs as well.

Here are some blockers on this:

  1. The thresholds in the classification procedure are hardcoded and linked to a specific standard space template.
  2. We need methods to derive our masks of interest (brain, edge, CSF, and out-of-brain) in native space. I think we need to assume that tissue probability maps are available.
@CesarCaballeroGaudes

Indeed, we should operate in native space, perhaps using AFNI commands that do not interpolate the functional data to the resolution of the template, or vice versa.
1- IMO, the thresholds for spatial criteria should be the same because they are defined in terms of percentage.
2- Why not warp the MNI template of the ventricles to the native space instead? Alternatively, if tissue segmentation is available or desired to be computed, we could run FAST (FSL) or 3dSeg (AFNI) and then erode and dilate the CSF mask to keep only voxels in the lateral ventricles (i.e. remove CSF voxels in edges and sulci).

@tsalo

tsalo commented Oct 26, 2020

> 1- IMO, the thresholds for spatial criteria should be the same because they are defined in terms of percentage.

There are some odd ratios, though. For example, csfFract is defined as the sum of z-statistics in CSF divided by the sum of z-statistics in the whole image. In native space, that is directly impacted by the coverage of the scan. Using the same template and masks for everything controls for this, but in native space we'll have a problem. I think we may need to scale it by the proportion of CSF voxels in the whole brain, and then adjust the classification threshold by that scaling factor.
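For concreteness, a minimal sketch of that fraction, assuming the component z-statistic map and a binary CSF mask are available as NumPy arrays (names are hypothetical; the absolute-value convention follows the edge-fraction definition quoted later in this thread):

```python
import numpy as np

def csf_fract(z_map, csf_mask):
    """Sum of |z| within the CSF mask divided by the sum of |z| in the whole image."""
    abs_z = np.abs(z_map)
    total = abs_z.sum()
    return abs_z[csf_mask].sum() / total if total > 0 else 0.0
```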

> 2- Why not warp the MNI template of the ventricles to the native space instead? Alternatively, if tissue segmentation is available or desired to be computed, we could run FAST (FSL) or 3dSeg (AFNI) and then erode and dilate the CSF mask to keep only voxels in the lateral ventricles (i.e. remove CSF voxels in edges and sulci).

Depending on what data are available, I think that should work.

@CesarCaballeroGaudes

1- Relatively. I agree with you that the number of CSF voxels will be different in native space than in template space. However, my opinion is that this number should not be very different (using the same voxel size, of course), at least for healthy young adult subjects; maybe not for elderly subjects, because they tend to have larger ventricles. I would stick with the same value for the moment and check whether this ratio of voxels changes considerably. Perhaps one possibility is a linear interpolation: compute the ratio of CSF voxels for the MNI template and adapt csfFract according to the ratio in native space. Thus, if the ratio is the same, csfFract also remains the same.
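A sketch of that linear adjustment, assuming the CSF-to-brain voxel ratios for both spaces are already known (all names hypothetical):

```python
def scale_csf_threshold(mni_threshold, csf_ratio_mni, csf_ratio_native):
    """Scale the csfFract threshold by the native/MNI ratio of CSF-voxel proportions.

    If the native-space proportion matches the MNI one, the threshold is unchanged.
    """
    return mni_threshold * (csf_ratio_native / csf_ratio_mni)
```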

@tsalo

tsalo commented Nov 7, 2020

Now that we're dropping native-to-MNI transforms (#45), we definitely can't warp the packaged standard space masks into native space. Which tissue probability maps do we need to get the requisite masks?

  • CSF mask: CSF tissue probability map
  • Edge mask: ??
  • Brain/out-of-brain mask: ??

@CesarCaballeroGaudes

I would say all three of them:
1- CSF mask, for the spatial criterion that computes the amount of variance of the spatial IC map that falls within the CSF ventricles. This can be computed from the CSF tissue probability map, then eroded and dilated by 2 voxels (e.g. 3dmask_tool -dilate_inputs -2 2), which should remove the CSF voxels in sulci and leave only the CSF in the ventricles (a rough Python equivalent is sketched below).
2- Edge mask: similarly, we need a mask that draws a ring at the borders of the brain. For instance, this can be computed as the EPI mask minus the same mask eroded by 2 voxels.
3- Do you mean a common EPI mask? We need it to constrain the ICA calculation to within-brain voxels.
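A rough pure-Python equivalent of steps 1 and 2, using scipy.ndimage in place of 3dmask_tool (the mask arrays and the 2-voxel structuring are assumptions, not decided values):

```python
from scipy import ndimage

def ventricle_and_edge_masks(csf_mask, epi_mask, n_voxels=2):
    """Keep mostly-ventricular CSF and build a brain-edge ring (binary 3D arrays assumed)."""
    # Erode then dilate the CSF mask to drop thin CSF in sulci
    # (roughly 3dmask_tool -dilate_inputs -2 2).
    ventricles = ndimage.binary_dilation(
        ndimage.binary_erosion(csf_mask, iterations=n_voxels), iterations=n_voxels
    )
    # Edge ring: the EPI mask minus the same mask eroded by n_voxels.
    edge_ring = epi_mask & ~ndimage.binary_erosion(epi_mask, iterations=n_voxels)
    return ventricles, edge_ring
```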

@CesarCaballeroGaudes

Which tools are you planning to use to compute the masks? Since fMRIprep is also interested in performing ICA-AROMA in native space, we could use the same tools, or the same mask-generation code as tedana, so that we can later incorporate these criteria into it.

@tsalo

tsalo commented Nov 7, 2020

> Which tools are you planning to use to compute the masks?

One of the main goals is to use pure Python, so I'm planning to use nilearn and nibabel. We can use scipy.ndimage for the erosion and dilation, and possibly nilearn.masking.compute_epi_mask for the initial brain and out-of-brain masks.
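A minimal sketch of that plan (the file name is hypothetical):

```python
from nilearn import masking
from scipy import ndimage

# Brain mask computed directly from the 4D EPI data.
epi_mask_img = masking.compute_epi_mask('func.nii.gz')  # hypothetical file name
epi_mask = epi_mask_img.get_fdata().astype(bool)

# Out-of-brain mask is simply the complement of the brain mask.
oob_mask = ~epi_mask

# scipy.ndimage covers the erosion/dilation steps discussed above.
eroded_brain = ndimage.binary_erosion(epi_mask, iterations=2)
```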

> Edge mask: similarly, we need a mask that draws a ring at the borders of the brain. For instance, this can be computed as the EPI mask minus the same mask eroded by 2 voxels.

The EPI mask will have the ventricles filled in, so eroding it won't catch the borders in the edge mask (see below).

Perhaps if we erode it after subtracting the CSF mask from the EPI mask?

If so, then we can create the brain mask from the EPI data, the CSF mask from a CSF tissue probability map, and the edge mask from a combination of the EPI mask and CSF TPM. I don't think we'll need gray matter or white matter TPMs.

@CesarCaballeroGaudes

CesarCaballeroGaudes commented Nov 7, 2020

The CSF mask is computed from the CSF tissue probability maps. I suggest we use the same code as fMRIprep to compute the tissue probability maps.

@CesarCaballeroGaudes

The edge mask is the rim shown in the picture. Maybe we need more than 2 voxels.

@tsalo

tsalo commented Nov 7, 2020

The edge mask used by ICA-AROMA isn't just the rim. It's the internal edges too.

@CesarCaballeroGaudes

Yes, yes, I mean the internal edges too, so maybe dilate the EPI mask by 2-3 voxels, erode it by 2-3 voxels, and then subtract the eroded mask from the dilated one.

@tsalo

tsalo commented Nov 7, 2020

Oh okay. Yes, I get something pretty good with the following code (4 erosions), using just the provided masks:

```python
import nibabel as nib
import numpy as np
from nilearn import image
from scipy import ndimage

csf_img = nib.load('mask_csf.nii.gz')
oob_img = nib.load('mask_out.nii.gz')
brain_img = image.math_img('1 - mask', mask=oob_img)  # brain = NOT out-of-brain
gmwm_img = image.math_img('(brain - csf) > 0', brain=brain_img, csf=csf_img)  # brain minus CSF
arr = gmwm_img.get_fdata()
temp = ndimage.binary_erosion(arr, iterations=4).astype(np.int8)  # cast bool for NIfTI output
temp_img = nib.Nifti1Image(temp, gmwm_img.affine, header=gmwm_img.header)
edge_img = image.math_img('brain - core', brain=gmwm_img, core=temp_img)  # edges = mask minus eroded core
```

Plotting the resulting edge_img gives me the following, which is slightly different from the original but still pretty close:
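A view like that can be generated with nilearn's plotting module (a sketch, reusing edge_img from the block above):

```python
from nilearn import plotting

# Display the derived edge mask as an ROI overlay for visual inspection.
plotting.plot_roi(edge_img, title='edge_img')
plotting.show()
```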

I think that means we can use the same code on the binarized CSF tissue probability map and the nilearn-generated EPI mask that we'll use throughout the native-space workflow.

@tsalo

tsalo commented Nov 7, 2020

If the above approach is good, then we just need a threshold for the CSF tissue probability map for generating the CSF mask. Then, if the user provides a tissue probability map, we use that. Otherwise, we assume the data are in standard space and use the packaged masks.
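A sketch of that branch, with a placeholder cutoff (the actual threshold value and file name are still to be decided):

```python
from nilearn import image

# Binarize a user-provided CSF tissue probability map; 0.95 and the file name
# are placeholders, not decided values.
csf_mask_img = image.math_img('tpm >= 0.95', tpm='csf_probseg.nii.gz')
```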

@CesarCaballeroGaudes

CesarCaballeroGaudes commented Nov 7, 2020

Quoting the ICA-AROMA paper:

> Edge fraction
> Head movements will induce strong variations in voxels that are located near intensity edges of the brain, as head motion will shift the location of the brain relative to the voxel location (i.e. voxels do not represent identical brain regions over time). Accordingly, we assessed each IC's representation near the edge of the brain. To this end we defined an edge mask by subtracting an eroded whole-brain mask (in MNI152 2 mm space, eroded using a 10 mm box kernel) from the full MNI152 2 mm mask, hence retaining the edge of the brain. Prior to the erosion we subtracted a CSF mask from the whole-brain mask such that the edges around the CSF would be included in the edge mask. Next, we calculated each IC's edge fraction as the sum of absolute Z-values of voxels overlapping the edge mask or located outside the whole-brain mask, divided by the sum of absolute Z-values of all voxels.

So, yes, that picture is pretty nice.
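Putting that edge-fraction definition into code, a sketch assuming the component z-map, edge mask, and whole-brain mask are NumPy arrays (names hypothetical):

```python
import numpy as np

def edge_fract(z_map, edge_mask, brain_mask):
    """Sum of |Z| in the edge mask or outside the brain, over the sum of |Z| in all voxels."""
    abs_z = np.abs(z_map)
    edge_or_outside = edge_mask | ~brain_mask
    return abs_z[edge_or_outside].sum() / abs_z.sum()
```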

@CesarCaballeroGaudes

My question is whether we can also give the option of computing the tissue probability maps within the program. We could use the same code as fMRIprep, nipype, or this algorithm in dipy. Or do we want to minimize dependencies?
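If in-program segmentation were wanted, dipy's HMRF tissue classifier is one pure-Python option (a sketch; whether this is the dipy algorithm referred to above is not confirmed, and the file name is hypothetical):

```python
import nibabel as nib
from dipy.segment.tissue import TissueClassifierHMRF

# Classify a T1-weighted image into 3 tissue classes (CSF/GM/WM);
# the last argument (beta) controls the spatial regularization.
t1_data = nib.load('anat_T1w.nii.gz').get_fdata()
hmrf = TissueClassifierHMRF()
initial_seg, final_seg, tissue_probs = hmrf.classify(t1_data, 3, 0.1)
```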

@tsalo

tsalo commented Nov 7, 2020

I think I'd prefer to accept a TPM as an argument instead of trying to calculate it within the code, at least for now. If the dipy method is pure Python, though, I think that's reasonable to support in the future. What do you think?

@CesarCaballeroGaudes

Agree, we will need to look into the code.

@oesteban

Sorry for joining late. Great discussion!

> There are some odd ratios, though. For example, csfFract is defined as the sum of z-statistics in CSF divided by the sum of z-statistics in the whole image. In native space, that is directly impacted by the coverage of the scan.

Then the problem is not the coverage (except for limited-FoV scans). If the resolution is the same and the whole brain is covered, then the ratio should be the same. Probably a simple correction factor based on the ratio between the volume of the voxels in MNI ([2 mm]^3) and in native space would do.
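A sketch of that correction factor, computed from the native image's affine (file name hypothetical):

```python
import numpy as np
import nibabel as nib

# Voxel volume in mm^3 from the determinant of the affine's 3x3 linear part.
native_img = nib.load('native_bold.nii.gz')  # hypothetical file name
native_voxel_vol = abs(np.linalg.det(native_img.affine[:3, :3]))

# MNI152 2 mm template voxels are 8 mm^3, so the ratio gives a simple correction factor.
correction = native_voxel_vol / 8.0
```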

> 2- Why not warp the MNI template of the ventricles to the native space instead? Alternatively, if tissue segmentation is available or desired to be computed, we could run FAST (FSL) or 3dSeg (AFNI) and then erode and dilate the CSF mask to keep only voxels in the lateral ventricles (i.e. remove CSF voxels in edges and sulci).

I think this makes sense.

> I think I'd prefer to accept a TPM as an argument instead of trying to calculate it within the code, at least for now. If the dipy method is pure Python, though, I think that's reasonable to support in the future. What do you think?

Agreed, I'd leave that responsibility to some other module. Here you really want to test that, given adequate TPMs, you get components similar (or better, if @tsalo and @CesarCaballeroGaudes are willing to eyeball a lot of data) to those that the original implementation gives.
