
## Hierarchical drift diffusion modeling

Sequential sampling models, such as the drift-diffusion model (DDM 28), have been used to characterize evolving perceptual decisions in 2-alternative forced choice (2AFC) random dot motion tasks 74, where the evolving decision relates to overt stimulus dynamics. In contrast to such applications, evidence integration here is tied to eidetic memory traces following probe onset, similar to applications during memory retrieval 76 or probabilistic decision making 77. We estimated individual evidence integration parameters with the HDDM 0.6.0 toolbox 78, whose hierarchical structure lets the large number of participants establish group priors that constrain the relatively sparse within-subject data. Independent models were fit to data from the EEG and the fMRI session to allow reliability assessments of individual estimates. Premature responses faster than 250 ms were excluded prior to modeling, and the probability of outliers was set to 5%. We drew 7000 Markov chain Monte Carlo (MCMC) samples to estimate parameters, discarding the first 5000 as burn-in to achieve convergence. We judged convergence for each model by visually assessing both Markov chain convergence and posterior predictive fits. Individual estimates were averaged across the remaining 2000 samples for follow-up analyses.
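The accumulation-to-bound process that the DDM formalizes can be illustrated with a simple Euler-Maruyama simulation. This is a minimal sketch, not part of the repository's code; the parameter names (drift rate `v`, boundary separation `a`, non-decision time `ndt`) follow common DDM conventions:

```python
import numpy as np

def simulate_ddm_trial(v, a, ndt, dt=0.001, noise=1.0, rng=None, max_t=10.0):
    """Simulate one DDM trial: evidence starts unbiased at a/2 and drifts
    (rate v, diffusion noise) until it crosses 0 or the boundary a.
    Returns (response_time, hit_upper_bound)."""
    rng = np.random.default_rng() if rng is None else rng
    x = a / 2.0  # unbiased starting point between the two bounds
    t = 0.0
    while 0.0 < x < a and t < max_t:
        x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    # upper bound coded as the correct response (accuracy coding)
    return ndt + t, bool(x >= a)

# with a clearly positive drift, most trials end at the upper (correct) bound
rng = np.random.default_rng(0)
trials = [simulate_ddm_trial(v=2.0, a=2.0, ndt=0.3, rng=rng) for _ in range(200)]
accuracy = np.mean([correct for _, correct in trials])
```

A higher drift rate speeds responses and raises accuracy, while a wider boundary separation trades speed for accuracy; the non-decision time shifts the whole RT distribution.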

We fitted the models to correct and incorrect RTs (termed ‘accuracy coding’ in Wiecki et al. 78). To explain differences in decision components, we compared four separate models. In the ‘full model’, we allowed the following parameters to vary between conditions: (i) the mean drift rate across trials, (ii) the threshold separation between the two decision bounds, and (iii) the non-decision time (NDT), which represents the summed duration of sensory encoding and response execution. In the remaining models, we reduced model complexity by varying only (a) drift, (b) drift + threshold, or (c) drift + NDT, with a null model fixing all three parameters. For model comparison, we first used the Deviance Information Criterion (DIC) to select the model that provided the best fit to our data. The DIC compares models on the basis of the maximal log-likelihood value while penalizing model complexity. The full model provided the best fit to the empirical data based on the DIC index (Supplementary Figure 1c) in both the EEG and the fMRI session. However, although this model indicated an increase in decision thresholds (i.e., boundary separation), there was no equivalent effect in the electrophysiological data (Supplementary Figure 1d). We therefore fixed the threshold parameter across conditions, in line with previous work constraining DDM parameters on the basis of electrophysiological evidence 30.
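For reference, the DIC combines the posterior mean deviance with an effective-parameter penalty: DIC = D̄ + pD, where pD = D̄ − D(θ̄). A minimal sketch of the comparison step follows; the deviance samples are illustrative placeholders, not values from the study (in HDDM, a fitted model exposes this quantity as `m.dic`):

```python
import numpy as np

def dic(posterior_deviances, deviance_at_mean):
    """DIC = mean posterior deviance + pD, where the complexity penalty
    pD = mean deviance - deviance evaluated at the posterior mean."""
    d_bar = np.mean(posterior_deviances)
    p_d = d_bar - deviance_at_mean
    return d_bar + p_d

# hypothetical posterior deviance samples for two candidate models
rng = np.random.default_rng(1)
full_model_dic = dic(rng.normal(1000.0, 5.0, size=2000), 990.0)
null_model_dic = dic(rng.normal(1050.0, 5.0, size=2000), 1045.0)
best = "full" if full_model_dic < null_model_dic else "null"
```

Lower DIC indicates a better fit after penalizing complexity; here the hypothetical full model wins despite its larger penalty term.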

## Code overview


- create .mat and .dat files for the HDDM toolbox
- (check whether the .mat files are used)


- main scripts that fit the models

To execute the notebooks programmatically from the command line (i.e., without opening them in Jupyter):

```shell
# create a virtual environment (requires virtualenvwrapper) and install dependencies
mkvirtualenv --python=$(which python3) hddm_env
pip install hddm
pip install jupyter
# register the environment as a Jupyter kernel
python -m ipykernel install --user --name=hddm_env
# important: activate the environment if not already active
workon hddm_env
# run the notebook with the hddm_env kernel
jupyter nbconvert --ExecutePreprocessor.kernel_name='hddm_env' --to notebook --execute b_HDDM_modeling_EEG_YA_vt.ipynb
```


- manually copy DICs from the Jupyter notebooks
- plot DICs by session and age group for the selected models



- posterior predictive checks: plot predicted vs. observed RTs given the parameters
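One common way to express such a posterior predictive check is a quantile comparison between observed and model-simulated RT distributions. This is an illustrative numpy sketch with toy data, not the notebooks' actual plotting code:

```python
import numpy as np

def rt_quantile_check(observed_rt, simulated_rt, probs=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Compare observed vs. model-predicted RT distributions at a set of
    quantiles; a small maximum discrepancy indicates a good predictive fit."""
    obs_q = np.quantile(observed_rt, probs)
    sim_q = np.quantile(simulated_rt, probs)
    return obs_q, sim_q, np.max(np.abs(obs_q - sim_q))

# toy example: right-skewed, NDT-shifted RTs; 'simulated' drawn from the
# same distribution as 'observed', so the quantiles should nearly coincide
rng = np.random.default_rng(2)
observed = 0.3 + rng.gamma(2.0, 0.2, size=1000)
simulated = 0.3 + rng.gamma(2.0, 0.2, size=5000)
obs_q, sim_q, max_gap = rt_quantile_check(observed, simulated)
```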



- raincloud plots of average parameters by load



- assess parameter interrelations
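Such interrelations can be quantified as pairwise correlations of per-participant parameter estimates. The arrays below are hypothetical stand-ins for per-subject drift (`v`), threshold (`a`), and non-decision time (`t`) estimates, with a positive v-a coupling built in for illustration:

```python
import numpy as np

# hypothetical per-subject DDM estimates (not data from the study)
rng = np.random.default_rng(3)
v = rng.normal(2.0, 0.5, size=47)                 # drift rates
a = 1.5 + 0.3 * v + rng.normal(0, 0.2, size=47)   # thresholds, coupled to v
t = rng.normal(0.35, 0.05, size=47)               # non-decision times

# Pearson correlations across participants
r_va = np.corrcoef(v, a)[0, 1]  # clearly positive by construction here
r_vt = np.corrcoef(v, t)[0, 1]
```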


- compare CPP slopes between younger adults (YA) and older adults (OA)



- compare parameters at load 1 and their linear load modulation between YA and OA



- plot RT distributions, aligned to the NDT estimate from load 1
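Aligning RT distributions to the non-decision time amounts to subtracting each participant's load-1 NDT estimate from their RTs, so that zero marks the estimated onset of evidence accumulation. A minimal sketch with hypothetical values:

```python
import numpy as np

def align_to_ndt(rts, ndt_load1):
    """Shift a participant's RTs by their load-1 non-decision time,
    so zero marks the estimated start of evidence accumulation."""
    return np.asarray(rts) - ndt_load1

# hypothetical RTs (s) and a hypothetical load-1 NDT estimate
rts = np.array([0.55, 0.72, 0.61])
aligned = align_to_ndt(rts, ndt_load1=0.30)  # -> [0.25, 0.42, 0.31]
```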



- supplementary analysis (YA only, reported in NatComms) assessing whether changes in target agreement are sufficient to capture load effects