CBMA simulations #1

Open
6 tasks
jdkent opened this issue Mar 29, 2021 · 3 comments

jdkent (Member) commented Mar 29, 2021

Summary

Demonstrate the usability/stability of NiMARE's CBMA estimators and provide soft recommendations
for users. This work builds off:

Additional details

  • systematic comparison of CBMA estimators with simulated and real data

Next steps

Steps:

  • decide on null data generation process
    • spatial distribution options:
      • choose voxels uniformly from a (gray matter) mask (essentially copy the empirical generation method)
      • pull randomly from a probabilistic map (Eickhoff 2016)
      • create an advanced model (possibly a gaussian mixture model) to account for spatial distribution of coordinates?
    • choose number of participants to simulate
    • choose number of foci to simulate
    • choose number of study contrasts to include in simulated meta-analysis
  • compare empirical and analytic estimation on null data to test false positive rates
    • (use analytic null for further analyses if they are well correlated with the empirical null)
    • how should we determine kernel size/how many kernel sizes should we test over?
  • Select studies where we have group level statistical maps:
    • Datasets available:
      • naturalistic imaging
      • pain dataset
      • others?
    • another consideration, to ensure IBMA reaches a consistent result for comparison to CBMA, is to treat individual statistical maps as group maps.
  • select several IBMA methods to compare to CBMA results
    • Which methods should we use?
      • Fishers
        • does not take into account variances
      • Stouffers
        • does not take into account variances
      • WeightedLeastSquares
      • DerSimonianLaird
      • Hedges
        • gave poor results using the naturalistic imaging datasets
      • SampleSizeBasedLikelihood
      • VarianceBasedLikelihood
      • PermutedOLS
    • What thresholds should be applied to the results?
  • convert the images to coordinate datasets:
    • what parameters should be chosen/varied?
      • min distance between clusters? (default 8 mm)
      • stat threshold (z=3.1?)
      • cluster_threshold
  • run ALE, KDA, and MKDA on generated coordinate datasets
    • choices on kernel size?
      • ALE kernel can be decided based on sample size...
    • should the output be thresholded at multiple levels (0.01, 0.001)
    • should there be FDR/FWER corrections applied to the output?
  • compare CBMA maps to IBMA maps
    • what metric(s) should we choose?
      • Dice similarity
      • Correlation
      • "True" Positive Rate
        • (which voxels were statistically significant with IBMA and CBMA)
      • False Positive Rate
        • (which voxels were statistically non-significant with IBMA, but significant with CBMA)
    • Should we "hold out" some of the data to see if the results (which estimator with what parameters is most like IBMA) generalize to new data?
    • the IBMA data will contain positive and negative values, whereas the CBMA analyses will only contain positive values (which may represent either positive or negative statistical peaks); we probably just want to observe positive values, and/or compare similarity to positive and negative values separately.
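The comparison metrics listed above can be sketched in plain numpy. This is a minimal sketch assuming the IBMA and CBMA results have been resampled to the same grid; `ibma_sig`/`cbma_sig` are hypothetical boolean significance maps and `ibma_stat`/`cbma_stat` the unthresholded statistic maps (all names are placeholders, not NiMARE outputs):

```python
import numpy as np

def compare_maps(ibma_sig, cbma_sig, ibma_stat, cbma_stat):
    """Compare a CBMA result to an IBMA 'reference' result.

    ibma_sig, cbma_sig : boolean arrays of significant voxels
    ibma_stat, cbma_stat : unthresholded statistic maps
    """
    ibma_sig = np.asarray(ibma_sig, dtype=bool)
    cbma_sig = np.asarray(cbma_sig, dtype=bool)
    overlap = np.sum(ibma_sig & cbma_sig)
    # Dice similarity of the two significance masks.
    dice = 2 * overlap / (ibma_sig.sum() + cbma_sig.sum())
    # Pearson correlation of the unthresholded maps.
    corr = np.corrcoef(np.ravel(ibma_stat), np.ravel(cbma_stat))[0, 1]
    # "True" positive rate: CBMA-significant among IBMA-significant voxels.
    tpr = overlap / ibma_sig.sum()
    # False positive rate: CBMA-significant among IBMA-non-significant voxels.
    fpr = np.sum(~ibma_sig & cbma_sig) / np.sum(~ibma_sig)
    return {"dice": dice, "corr": corr, "tpr": tpr, "fpr": fpr}
```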

nicholst commented Apr 7, 2021

Looks good. I'd also point you to this reference, section 4.1

  • Samartsidis, P., Montagna, S., Johnson, T. D., & Nichols, T. E. (2017). The Coordinate-Based Meta-Analysis of Neuroimaging Data. Statistical Science, 32(4), 580–599. https://doi.org/10.1214/17-STS624

... this work carefully tunes the proportion of studies possessing an effect.


nicholst commented Apr 7, 2021

Other random comments

Inevitably, there will be many different parameters to vary, and it will be impossible to evaluate everything over a full-factorial exploration of all possible settings. Hence, for each parameter identified to be varied, be sure to identify the 'default' value that the parameter will take on while the other parameters are varied.
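That one-at-a-time design could be sketched like this (all parameter names and values below are placeholders, not the decided settings):

```python
# One-at-a-time sweep: hold every parameter at its default and vary one.
defaults = {"n_studies": 20, "mean_foci": 5, "kernel_mm": 10}
vary = {
    "n_studies": [10, 20, 40],
    "kernel_mm": [6, 10, 14],
}

configs = []
for name, values in vary.items():
    for value in values:
        config = dict(defaults)
        config[name] = value
        configs.append(config)
# Each config differs from the defaults in at most one parameter,
# so the number of runs grows additively rather than multiplicatively.
```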

Null data generation process, spatial distribution options:

  • choose voxels uniformly from a (gray matter) mask (essentially copy the empirical generation method).

  • pull randomly from a probabilistic map (Eickhoff 2016)
    -- For both, need to choose distribution and mean number of foci per study
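Both spatial options (uniform over a mask, or weighted by a probabilistic map) plus a foci-count distribution can be prototyped together. A sketch assuming a Poisson count of foci per study; the mask, probability map, and Poisson mean are toy placeholders:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_foci(mask, n_studies, mean_foci, prob_map=None, rng=rng):
    """Sample per-study focus coordinates from a binary mask.

    If prob_map is given, voxels inside the mask are drawn with
    probability proportional to the map (Eickhoff 2016 style);
    otherwise uniformly. Number of foci per study is Poisson.
    """
    ijk = np.argwhere(mask)  # candidate voxel coordinates
    if prob_map is None:
        p = None  # uniform over mask voxels
    else:
        w = prob_map[mask.astype(bool)]
        p = w / w.sum()
    studies = []
    for _ in range(n_studies):
        n_foci = max(1, rng.poisson(mean_foci))  # at least one focus
        idx = rng.choice(len(ijk), size=n_foci, p=p)
        studies.append(ijk[idx])
    return studies
```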

  • create an advanced model (possibly a gaussian mixture model) to account for spatial distribution of coordinates?
    -- See Samartsidis et al. (2017), §4.1.

  • choose number of participants to simulate
    -- This is perhaps the most important single parameter... will need a range... do we have any empirical evidence on the number of studies typically used?

  • choose number of foci to simulate
    -- See above; number must be random, and have to decide on the distribution for this randomness.

  • choose number of study contrasts to include in simulated meta-analysis
    -- Really tricky issue... most methods ignore issue of 2 or more contrasts within a given paper, and this paper suggests that it really doesn't matter... maybe skip this issue? (i.e. just assume each study contributes a single contrast).

  • compare empirical and analytic estimation on null data to test false positive rates
    -- As noted below, need to compute various FPR measures... average FPR per voxel at some uncorrected threshold (if possible at all), FWE voxelwise, FWE clusterwise.
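These FPR measures can be estimated from repeated null simulations. A toy version using white-noise z-maps as a stand-in for the actual null CBMA output (the simulation sizes and thresholds are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_sims, n_voxels = 2000, 500
null_z = rng.standard_normal((n_sims, n_voxels))  # stand-in null z-maps

# Average per-voxel FPR at an uncorrected one-sided threshold
# (z = 1.6449 corresponds to a nominal p < 0.05).
z_unc = 1.6449
fpr_voxel = (null_z > z_unc).mean()

# Voxelwise FWE: the 95th percentile of the per-simulation maximum
# statistic is an empirical FWE-controlling threshold; by construction
# about 5% of null simulations exceed it somewhere in the map.
max_z = null_z.max(axis=1)
z_fwe = np.quantile(max_z, 0.95)
fwe_rate = (max_z > z_fwe).mean()
```

The same max-statistic logic extends to clusterwise FWE by recording the maximum cluster size per simulation instead of the maximum voxel value.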

  • how should we determine kernel size/how many kernel sizes should we test over?
    -- Another whole ball of wax... ignore, or pick some small set (e.g. 3).

(Agree with your listed methods to use)

  • What thresholds should be applied to the results?
    -- See inference comment above.

  • convert the images to coordinate datasets:

    • what parameters should be chosen/varied?
      • min distance between clusters? (default 8 mm)
      • stat threshold (z=3.1?)
      • cluster_threshold
    -- All reasonable
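The image-to-coordinate conversion with those parameters could look roughly like this sketch, which uses scipy's `ndimage.label` for cluster labeling and keeps one peak per surviving cluster (the minimum-distance-between-peaks step is omitted, and the function name is a placeholder):

```python
import numpy as np
from scipy import ndimage

def map_to_peaks(z_map, z_thresh=3.1, cluster_extent=10):
    """Reduce a statistic map to peak coordinates, roughly mirroring
    how coordinates get reported in papers: threshold, label clusters,
    drop small clusters, keep each surviving cluster's maximum."""
    labeled, n_clusters = ndimage.label(z_map > z_thresh)
    peaks = []
    for lab in range(1, n_clusters + 1):
        in_cluster = labeled == lab
        if in_cluster.sum() < cluster_extent:
            continue  # drop clusters below the extent threshold
        masked = np.where(in_cluster, z_map, -np.inf)
        peaks.append(np.unravel_index(np.argmax(masked), z_map.shape))
    return peaks
```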
  • should the output be thresholded at multiple levels (0.01, 0.001)

  • should there be FDR/FWER corrections applied to the output?
    -- I would pick a set of inferential methods (e.g. uncorrected 0.01, 0.001, voxel FWER 0.05, cluster FWER 0.05) and then run them on all null and real data evaluations.
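That fixed set of inference rules could be wrapped in one helper so the identical rules run on every null and real evaluation. A sketch over uncorrected voxelwise p-values, using Bonferroni as a simple stand-in for voxel FWE (cluster FWE would additionally need a permutation scheme, so it is omitted):

```python
import numpy as np

def threshold_set(p_map):
    """Apply a fixed set of inference rules to a map of
    uncorrected voxelwise p-values."""
    n = p_map.size
    return {
        "unc_0.01": p_map < 0.01,
        "unc_0.001": p_map < 0.001,
        "vox_fwe_0.05": p_map < 0.05 / n,  # Bonferroni stand-in for voxel FWE
    }
```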

-- Agree on the choices of metrics

Should we "hold out" some of the data to see if the results (which estimator with what parameters is most like IBMA) generalize to new data?
-- One of the biggest problems is that ALE is not a generative model... so given an ALE map, how would you assert that an out-of-sample IBMA or CBMA sample is similar?
-- About negatives, no big deal: evaluate similarity twice, once on the whole image and once only where truth is positive.

tsalo (Member) commented Jun 24, 2021

@jdkent created a repository (https://github.com/neurostuff/simulate-cbma) for the analyses. I'm not sure where it stands in relation to the analysis plan in this issue, but it looks like there's a lot there.

tsalo transferred this issue from neurostuff/NiMARE Nov 17, 2021