Description
Currently the leave-one-out (pairwise=False) ISC and ISFC functions use np.mean to average across N–1 subjects prior to computing the correlation with the left-out subject. This means a NaN for any single subject will propagate a NaN into the mean time series. NaNs may occur during z-scoring prior to ISC computation because some voxels have zero variance, especially at the edges of the brain due to limited FoV for EPI scans and susceptibility dropout (especially bad for older datasets). See the attached figure, where the colored values indicate how many subjects (out of 20 total) have NaNs for a given voxel; currently, all colored voxels in this figure would yield a NaN after ISC computation.

I propose using np.nanmean rather than np.mean when computing the average time series across N–1 subjects so that NaNs can be accommodated. For example, in the case corresponding to the attached figure, I may opt to completely exclude all voxels where 50% or 90% or more of subjects have NaNs, but I would still like to include voxels where only a couple of subjects have NaNs. This would require some special care by the user, so I wanted to run this past others. I'm not sure how accepting of NaNs we should be... I suppose we could also have a keyword argument like allow_nans=True for flexibility (or even allow_nans=.9). Any thoughts on this? @mihaic @qihongl @manojneuro
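To make the proposal concrete, here is a minimal sketch of what the nanmean substitution could look like in the leave-one-out loop. This is illustrative only: the function name `isc_loo`, the `tolerate_nans` keyword, and the assumption that the input is already z-scored per subject are my own, not the current BrainIAK API.

```python
import numpy as np

def isc_loo(data, tolerate_nans=True):
    """Leave-one-out ISC sketch (not the actual BrainIAK implementation).

    data : ndarray, shape (n_TRs, n_voxels, n_subjects), assumed z-scored
        per subject, so zero-variance voxels are all-NaN for that subject.
    tolerate_nans : if True, use np.nanmean so the mean time series at a
        voxel survives a few subjects with NaNs instead of becoming NaN.
    """
    n_TRs, n_voxels, n_subjects = data.shape
    mean = np.nanmean if tolerate_nans else np.mean
    iscs = np.empty((n_subjects, n_voxels))
    for s in range(n_subjects):
        # Average the N-1 other subjects; nanmean skips subjects whose
        # time series are NaN at a given voxel rather than poisoning the mean
        others = mean(np.delete(data, s, axis=2), axis=2)
        left_out = data[..., s]
        # Pearson correlation per voxel (both series are zero-mean after
        # z-scoring); NaNs in the left-out subject, or voxels where all
        # other subjects are NaN, still yield NaN ISCs as expected
        num = np.sum(left_out * others, axis=0)
        denom = (np.sqrt(np.sum(left_out ** 2, axis=0)) *
                 np.sqrt(np.sum(others ** 2, axis=0)))
        iscs[s] = num / denom
    return iscs
```

The user-side masking mentioned above could then look something like this (again just an illustration of the idea, with a hypothetical 50% threshold):

```python
# Drop voxels where half or more of the subjects have all-NaN time series,
# and tolerate the remaining scattered NaNs via nanmean
nan_subjects = np.all(np.isnan(data), axis=0)   # (n_voxels, n_subjects)
keep = nan_subjects.mean(axis=1) < 0.5          # voxels with < 50% NaN subjects
iscs = isc_loo(data[:, keep, :], tolerate_nans=True)
```

An `allow_nans=.9`-style argument could fold this thresholding into the function itself instead of leaving it to the user.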