4  Experiment 1b: No Context Reinstatement

Below are the packages you should install to ensure this document runs properly.

Code
#load packages (duplicates removed)
library(plyr)
library(easystats)
library(tidyverse)
library(knitr)
library(ggeffects)
library(here)
library(data.table)
library(ggrepel)
library(brms)
library(ggdist)
library(emmeans)
library(tidylog)
library(tidybayes)
library(hypr)
library(cowplot)
library(colorspace)
library(ragg)
library(ggtext)
library(MetBrewer)
library(modelbased)
library(flextable)
library(cmdstanr)
library(Rfssa)
library(gt)

options(digits = 3)
options(timeout = 200)
set.seed(666)

4.1 Figure Theme

Code
bold <- element_text(face = "bold", color = "black", size = 16) #axis bold

theme_set(theme_bw(base_size = 15, base_family = "Arial"))

theme_update(
  panel.grid.major = element_line(color = "grey92", size = .4),
  panel.grid.minor = element_blank(),
  axis.title.x = element_text(color = "grey30", margin = margin(t = 7)),
  axis.title.y = element_text(color = "grey30", margin = margin(r = 7)),
  axis.text = element_text(color = "grey50"),
  axis.ticks =  element_line(color = "grey92", size = .4),
  axis.ticks.length = unit(.6, "lines"),
  legend.position = "top",
  plot.title = element_text(hjust = 0, color = "black", 
                            family = "Arial",
                            size = 21, margin = margin(t = 10, b = 35)),
  plot.subtitle = element_text(hjust = 0, face = "bold", color = "grey30",
                               family = "Arial", 
                               size = 14, margin = margin(0, 0, 25, 0)),
  plot.title.position = "plot",
  plot.caption = element_text(color = "grey50", size = 10, hjust = 1,
                              family = "Arial", 
                              lineheight = 1.05, margin = margin(30, 0, 0, 0)),
  plot.caption.position = "plot", 
  plot.margin = margin(rep(20, 4))
)
pal <- c(met.brewer("Veronese", 3))
Code
## flat violin plots
### Relies largely on code written by David Robinson
### (https://gist.github.com/dgrtwo/eb7750e74997891d7c20) and ggplot2 by H. Wickham.
### Note: the GeomFlatViolin ggproto object must be sourced from that gist for
### geom_flat_violin() to run; the code below modifies the gist by removing a
### parenthesis in its line 50.

geom_flat_violin <- function(mapping = NULL, data = NULL, stat = "ydensity",
                             position = "dodge", trim = TRUE, scale = "area",
                             show.legend = NA, inherit.aes = TRUE, ...) {
  layer(
    data = data,
    mapping = mapping,
    stat = stat,
    geom = GeomFlatViolin,
    position = position,
    show.legend = show.legend,
    inherit.aes = inherit.aes,
    params = list(
      trim = trim,
      scale = scale,
      ...
    )
  )
}
# horizontal nudge position adjustment
# copied from https://github.com/tidyverse/ggplot2/issues/2733
position_hnudge <- function(x = 0) {
  ggproto(NULL, PositionHNudge, x = x)
}
PositionHNudge <- ggproto("PositionHNudge", Position,
                          x = 0,
                          required_aes = "x",
                          setup_params = function(self, data) {
                            list(x = self$x)
                          },
                          compute_layer = function(data, params, panel) {
                            transform_position(data, function(x) x + params$x)
                          }
)

5 Method

All raw and summary data, materials, and R scripts for pre-processing, analysis, and plotting for Experiment 1b can be found at https://osf.io/6sy7k/.

5.1 Participants

We used the same sample size as Experiment 1a (N = 216). All participants were recruited through the university subject pool (SONA) at Rutgers University. We used similar exclusion criteria to Experiment 1a. Because of this, we oversampled and then randomly selected 36 participants from each counterbalancing list to reach our target sample size.
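As a concrete illustration, here is a hypothetical sketch of that list-balancing step (assuming a trial-level data frame like blur_acc, loaded below, with participant and list columns; 6 lists x 36 = 216):

Code
# hypothetical sketch: keep 36 randomly chosen participants per list
set.seed(666)
kept_ids <- blur_acc %>%
  dplyr::distinct(participant, list) %>%
  dplyr::group_by(list) %>%
  dplyr::slice_sample(n = 36) %>%  # sample within each counterbalancing list
  dplyr::pull(participant)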

5.2 Apparatus, stimuli, design, procedure, and analysis

Similar to Experiment 1a, the experiment was run using PsychoPy (Peirce et al., 2019) and hosted on Pavlovia (www.pavlovia.org). You can see an example of the experiment by navigating to this website: https://run.pavlovia.org/Jgeller112/ldt_dd_l1_jol. You can also download the source code for the experiment at this site.

We used the same stimuli as Experiment 1a. The main difference between Experiments 1a and 1b was that, at test, all items were presented in a clear Arial font. To keep the design comparable to Experiment 1a, each set of words presented as clear, low blur, or high blur at study was yoked to a set of new words that was counterbalanced across lists. Therefore, instead of a single false alarm rate there were three, one for each blurring level. This ensured that studied clear, studied low blurred, and studied high blurred words were each compared against their own set of new words.
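To make the yoking concrete, here is a minimal sketch of the three condition-specific false alarm rates (using the mem_nc recognition data loaded in Section 6.8, where isold codes study status and sayold codes "old" responses):

Code
# one false alarm rate per yoked blur condition, rather than a pooled rate
mem_nc %>%
  dplyr::filter(isold == 0) %>%        # new (unstudied) words
  dplyr::group_by(blur) %>%            # yoked to C, LB, or HB study sets
  dplyr::summarise(fa = mean(sayold))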

We fit the same statistical models as Experiment 1a.

6 Results

6.1 Accuracy

The data file has already been cleaned (only participants with accuracy >= .80, no duplicate participants, no participants younger than 17).

Code
# get data from osf; lex == "m" keeps word trials only
blur_acc <- read_csv("https://osf.io/excgd/download") %>%
    dplyr::filter(lex=="m")

# trim fast and slow RTs
blur_acc_new <- blur_acc %>%
  dplyr::filter(rt >= .2 & rt <= 2.5)

head(blur_acc)
# A tibble: 6 × 14
   ...1 participant age   date  string study blur     rt  corr lex   list  bad_1
  <dbl>       <dbl> <chr> <chr> <chr>  <chr> <chr> <dbl> <dbl> <chr> <chr> <chr>
1     1      198670 18    2022… POODLE old   LB    0.702     1 m     L3    keep 
2     4      198670 18    2022… CRUST  old   C     0.773     1 m     L3    keep 
3    10      198670 18    2022… GREASE old   LB    0.617     1 m     L3    keep 
4    11      198670 18    2022… SCREW  old   LB    0.647     1 m     L3    keep 
5    14      198670 18    2022… CRADLE old   HB    1.35      1 m     L3    keep 
6    16      198670 18    2022… THROAT old   LB    0.664     1 m     L3    keep 
# ℹ 2 more variables: bad_2 <chr>, bad_3 <chr>
Code
dim(blur_acc)
[1] 18144    14
Code
dim(blur_acc_new)
[1] 17809    14

The analysis of accuracy is based on 18144 data points. After removing fast and slow RTs we were left with 17809 data points (a loss of 1.8%).
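For reference, the trimming loss can be recomputed directly from the two dimensions above:

Code
# proportion of trials removed by the RT trim
(nrow(blur_acc) - nrow(blur_acc_new)) / nrow(blur_acc) # 335/18144, about 1.8%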

6.2 Contrast Code

Code
## Contrasts
#hypothesis
blurC <-hypr(HB~C, HB~LB, levels=c("C", "HB", "LB"))
blurC
hypr object containing 2 null hypotheses:
H0.1: 0 = HB - C
H0.2: 0 = HB - LB

Call:
hypr(~HB - C, ~HB - LB, levels = c("C", "HB", "LB"))

Hypothesis matrix (transposed):
   [,1] [,2]
C  -1    0  
HB  1    1  
LB  0   -1  

Contrast matrix:
   [,1] [,2]
C  -2/3  1/3
HB  1/3  1/3
LB  1/3 -2/3
Code
#set contrasts in df (on both the full and RT-trimmed data, since the
#model below is fit to blur_acc_new)
blur_acc$blur <- as.factor(blur_acc$blur)
contrasts(blur_acc$blur) <- contr.hypothesis(blurC)

blur_acc_new$blur <- as.factor(blur_acc_new$blur)
contrasts(blur_acc_new$blur) <- contr.hypothesis(blurC)
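A quick sanity check that the factor now carries the intended coding:

Code
contrasts(blur_acc_new$blur) # should match the hypr contrast matrix above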

6.3 BRMs: Accuracy Model

Code
#weakly informative prior on the fixed effects
prior_exp1 <- c(set_prior("cauchy(0,.35)", class = "b"))

#fit model
fit_acc_weak <- brm(
  corr ~ blur + (1 + blur | participant) + (1 + blur | string),
  data = blur_acc_new,
  family = bernoulli(),
  prior = prior_exp1,
  warmup = 1000,
  iter = 5000,
  chains = 4,
  cores = 4,
  init = 0,
  control = list(adapt_delta = 0.9),
  sample_prior = TRUE,
  save_pars = save_pars(all = TRUE),
  backend = "cmdstanr",
  threads = threading(4),
  file = "fit_acc_weak_nocontext"
)
Code
# get fitted model from osf
tmp <- tempdir()
download.file("https://osf.io/ne36z/download", 
              file.path(tmp, "acc_blmm_expnocontext.RData"))
load(file.path(tmp, "acc_blmm_expnocontext.RData")) # loads fit_acc_noc

# model with the low blur vs. clear contrast
fit_acc_lbc <- read_rds("https://osf.io/yhz4c/download")

6.4 Model Summary

6.4.1 Hypotheses

Code
acc_means <- emmeans(fit_acc_noc, specs="blur", type="response") %>%
  as.data.frame()
Code
a = hypothesis(fit_acc_noc , "blur1 < 0")
b= hypothesis(fit_acc_noc , "blur2 < 0")
c= hypothesis(fit_acc_lbc, "blur1 =  0")

tab <- bind_rows(a$hypothesis, b$hypothesis, c$hypothesis)%>% 
    mutate(Evid.Ratio=as.numeric(Evid.Ratio))%>%
  select(-Star)

tab[, -1] <- t(apply(tab[, -1], 1, round, digits = 3))

tab %>% 
   mutate(Hypothesis = c("High Blur - Clear < 0", "High Blur - Low Blur < 0", "Low Blur - Clear = 0 ")) %>% 
  gt(caption=md("Table: Experiment 1b Accuracy")) %>% 
  cols_align(
    columns=-1,
    align="right"
  )
Table: Experiment 1b Accuracy
Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
High Blur - Clear < 0 -1.111 0.206 -1.454 -0.783 Inf 1.000
High Blur - Low Blur < 0 -1.072 0.187 -1.384 -0.769 Inf 1.000
Low Blur - Clear = 0 0.069 0.111 -0.149 0.285 0.905 0.475

6.4.2 Accuracy Figures

Code
top_mean <- blur_acc %>%  #get means for each blur cond for plot
  dplyr::filter(lex == "m") %>%
  group_by(blur) %>%
  dplyr::summarise(mean1 = mean(corr)) %>%
  dplyr::ungroup()


p_mean <- blur_acc %>%  #get participant x condition means for plotting
  dplyr::filter(lex == "m") %>%
  dplyr::group_by(participant, blur) %>%
  dplyr::summarise(mean1 = mean(corr))


p3 <- ggplot(p_mean, aes(x = blur , y = mean1, fill = blur)) +
    coord_cartesian(ylim = c(.5,1)) + 
  
  ggdist::stat_halfeye(
    aes(
      y = mean1,
      color = blur,
      fill = after_scale(lighten(color, .5))
    ),
    shape = 18,
    point_size = 3,
    interval_size = 1.8,
    adjust = .5,
    .width = c(0, 1)
  ) +
    geom_point(aes(x = blur, y = mean1, colour = blur),position = position_jitter(width = .05), size = 1, shape = 20)+
    geom_boxplot(aes(x = blur, y = mean1, fill = blur),outlier.shape = NA, alpha = .5, width = .1, colour = "black")+
  labs(subtitle = "Word Accuracy: No Context Reinstatement")+
     scale_color_manual(values=met.brewer("Cassatt2", 3))+
 scale_fill_manual(values=met.brewer("Cassatt2", 3))+
    stat_summary(fun=mean, geom="point", colour="darkred", size=3)+
    labs(y = "Accuracy", x = "Blur") +
    geom_label_repel(data=top_mean, aes(y=mean1, label=round(mean1, 2)), color="black", min.segment.length = 0, seed = 42, box.padding = 0.5) + 
    theme(axis.text=bold) + theme(legend.position = "none") 
  #  ggsave('place.png', width = 8, height = 6)
p3

6.4.3 Accuracy

Clear words were better identified (\(M\) = .987) than high blur words (\(M\) = .962), \(b\) = -1.111, 95% Cr.I[-1.454, -0.783], ER = Inf (an infinite evidence ratio means every posterior sample was consistent with the directional hypothesis). Low blurred words were also better identified than high blurred words, \(b\) = -1.072, 95% Cr.I[-1.384, -0.769], ER = Inf. However, the evidence for no difference in identification accuracy between clear and low blurred words was weak, \(b\) = 0.069, 95% Cr.I[-0.149, 0.285], ER = 0.905.

6.5 RTs

6.6 BRMs: Ex-Gaussian

Code
#load data from osf
rts <- read_csv("https://osf.io/excgd/download")
Code
blur_rt <- rts %>%
  group_by(participant) %>%
  dplyr::filter(corr==1, lex=="m") #only correct trials and words

blur_rt_new <- blur_rt %>% 
  dplyr::filter(rt >= .2 & rt <= 2.5) %>%
  mutate(rt_ms=rt*1000)


dim(blur_rt)
[1] 17222    14
Code
dim(blur_rt_new)
[1] 16939    15

The analysis of RTs (correct trials and words only) is based on 16939 data points after removing fast and slow RTs (a loss of 1.6%).

6.6.1 Density Plots

Code
p <- ggplot(blur_rt_new, aes(rt_ms, group = blur, fill = blur)) +
  geom_density(colour = "black", size = 0.75, alpha = 0.5) +
  scale_fill_manual(values=c("grey40", "orange1", "red")) +
  theme_bw() + # base theme first so it does not overwrite the theme() tweaks
  theme(axis.title = element_text(size = 16, face = "bold", colour = "black"), 
        axis.text = element_text(size = 16, colour = "black"), 
        plot.title = element_text(face = "bold", size = 20)) +
  coord_cartesian(xlim=c(600, 1100)) +
  scale_x_continuous(breaks=seq(600,1100,100)) +
  labs(title = "Density Plot By Blur", y = "Density", x = "Response latencies in ms")

p

6.7 Contrasts

Code
#hypothesis
blurC <-hypr(HB~C, HB~LB, levels=c("C", "HB", "LB"))
blurC
hypr object containing 2 null hypotheses:
H0.1: 0 = HB - C
H0.2: 0 = HB - LB

Call:
hypr(~HB - C, ~HB - LB, levels = c("C", "HB", "LB"))

Hypothesis matrix (transposed):
   [,1] [,2]
C  -1    0  
HB  1    1  
LB  0   -1  

Contrast matrix:
   [,1] [,2]
C  -2/3  1/3
HB  1/3  1/3
LB  1/3 -2/3
Code
#set contrasts in df (on both the full and RT-trimmed data)
blur_rt$blur <- as.factor(blur_rt$blur)
contrasts(blur_rt$blur) <- contr.hypothesis(blurC)

blur_rt_new$blur <- as.factor(blur_rt_new$blur)
contrasts(blur_rt_new$blur) <- contr.hypothesis(blurC)

6.7.1 Ex-Gaussian

6.7.1.1 Model Set-up

Code
bform_exg1 <- bf(
rt ~ 0+ blur + (1 + blur |p| participant) + (1 + blur|i| string),
sigma ~ 0+ blur + (1 + blur |p|participant) + (1 + blur |i| string),
beta ~ 0 + blur + (1 + blur |p|participant) + (1 + blur |i| string))
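The |p| and |i| terms link the group-level effects across the three distributional parameters so that their correlations are estimated. As a quick check of the parameters this formula implies (a sketch, assuming the trimmed blur_rt_new data), get_prior() lists every prior slot without fitting anything:

Code
# enumerate the parameters of the distributional model before fitting
get_prior(bform_exg1, data = blur_rt_new, family = exgaussian())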

6.7.1.2 Run Model

Code
prior_exp1 <- c(set_prior("normal(0,10)", class = "b"), 
                 set_prior("normal(0,10)", class = "b", dpar="sigma"), 
                 set_prior("normal(0,10)", class = "b", dpar="beta")
                

fit_exg1 <- brm(
bform_exg1, data = blur_rt,
warmup = 1000,
                    iter = 5000,
                    chains = 4,
                    prior = prior_exp1,
                    family = exgaussian(),
                    init = 0,
                    cores = 4, 
sample_prior = T, 
save_pars = save_pars(all=T),
control = list(adapt_delta = 0.8), 
backend="cmdstanr", 
threads = threading(4))
Code
#load fitted model from osf
fit_exg1 <- read_rds("https://osf.io/egqyt/download")

6.7.2 Model summary

6.7.2.1 Hypotheses

Code
a <- hypothesis(fit_exg1, "blurHB - blurC > 0", dpar="mu")

b <- hypothesis(fit_exg1, "blurHB - blurLB > 0", dpar="mu")

c <- hypothesis(fit_exg1, "blurLB - blurC > 0", dpar="mu")

d <- hypothesis(fit_exg1, "sigma_blurHB - sigma_blurC > 0", dpar="sigma")

e <- hypothesis(fit_exg1, "sigma_blurHB - sigma_blurLB > 0", dpar="sigma")

f <- hypothesis(fit_exg1, "sigma_blurLB - sigma_blurC = 0", dpar="sigma")

g <- hypothesis(fit_exg1, "beta_blurHB - beta_blurC > 0", dpar="beta")

h <- hypothesis(fit_exg1, "beta_blurHB - beta_blurLB > 0", dpar="beta")

i <- hypothesis(fit_exg1, "beta_blurLB - beta_blurC = 0", dpar="beta")

tab <- bind_rows(a$hypothesis, b$hypothesis, c$hypothesis, d$hypothesis, e$hypothesis, f$hypothesis, g$hypothesis, h$hypothesis, i$hypothesis) %>% 
    mutate(Evid.Ratio=as.numeric(Evid.Ratio))%>%
  select(-Star)

tab[, -1] <- t(apply(tab[, -1], 1, round, digits = 3))

tab %>% 
  mutate(parameter=c("mu","mu", "mu",  "sigma", "sigma", "sigma", "beta", "beta", "beta"))%>%
  mutate(Hypothesis = c("High Blur - Clear > 0", "High Blur - Low Blur > 0", "Low Blur - Clear >  0 ", "High Blur - Clear > 0", "High Blur - Low Blur > 0", "Low Blur - Clear =  0","High Blur - Clear > 0", "High Blur - Low Blur > 0", "Low Blur - Clear = 0  ")) %>%
  gt(caption=md("Table: Ex-Gaussian Model Results Experiment 1")) %>% 
  cols_align(
    columns=-1,
    align="right"
  )
Table: Ex-Gaussian Model Results Experiment 1
Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob parameter
High Blur - Clear > 0 0.208 0.009 0.194 0.222 Inf 1.000 mu
High Blur - Low Blur > 0 0.194 0.009 0.180 0.209 Inf 1.000 mu
Low Blur - Clear > 0 0.014 0.003 0.008 0.019 Inf 1.000 mu
High Blur - Clear > 0 0.273 0.075 0.149 0.393 3199.0 1.000 sigma
High Blur - Low Blur > 0 0.399 0.083 0.263 0.535 Inf 1.000 sigma
Low Blur - Clear = 0 -0.126 0.061 -0.250 -0.009 27.9 0.965 sigma
High Blur - Clear > 0 0.368 0.031 0.317 0.419 Inf 1.000 beta
High Blur - Low Blur > 0 0.349 0.032 0.296 0.401 Inf 1.000 beta
Low Blur - Clear = 0 0.019 0.024 -0.028 0.066 444.0 0.998 beta

6.7.2.2 Ex-Gaussian plots

Code
p1<-conditional_effects(fit_exg1, "blur", dpar = "mu")
p2<-conditional_effects(fit_exg1, "blur", dpar = "sigma")
p3<-conditional_effects(fit_exg1, "blur", dpar = "beta")

p1

Code
p2

Code
p3

6.7.3 Write-up

6.7.3.1 Ex-Gaussian

A visualization of how blurring affected processing can be seen in Fig. 5. Beginning with the μ parameter, there was greater shifting for high blurred words relative to clear words, \(b\) = 0.208, 95% Cr.I[0.194, 0.222], ER = Inf, and relative to low blurred words, \(b\) = 0.194, 95% Cr.I[0.18, 0.209], ER = Inf. Analyses of the σ and τ parameters yielded a similar pattern. High blurred words had greater variance than clear words, \(b\) = 0.273, 95% Cr.I[0.149, 0.393], ER = 3199, and than low blurred words, \(b\) = 0.399, 95% Cr.I[0.263, 0.535], ER = Inf. Finally, there was greater skewing for high blurred words relative to clear words, \(b\) = 0.368, 95% Cr.I[0.317, 0.419], ER = Inf, and relative to low blurred words, \(b\) = 0.349, 95% Cr.I[0.296, 0.401], ER = Inf. Low blurred words (vs. clear words) showed greater shifting on the μ parameter, \(b\) = 0.014, 95% Cr.I[0.008, 0.019], ER = Inf. For \(\tau\), the 95% Cr.I crossed zero and the evidence ratio for no difference exceeded 100 (ER = 444). For \(\sigma\), however, the 95% Cr.I excluded zero, \(b\) = -0.126, 95% Cr.I[-0.25, -0.009], ER = 27.9, indicating somewhat greater variability for clear than for low blurred words.

6.7.4 Diffusion modeling

Code
blur_rt_diff <- rts %>%
  group_by(participant) %>%
  dplyr::filter(rt >= .2 & rt <= 2.5) %>% # RT trim; correct AND error trials are kept for the diffusion model
  dplyr::filter(lex=="m")                 # words only

head(blur_rt_diff)
# A tibble: 6 × 14
# Groups:   participant [1]
   ...1 participant age   date  string study blur     rt  corr lex   list  bad_1
  <dbl>       <dbl> <chr> <chr> <chr>  <chr> <chr> <dbl> <dbl> <chr> <chr> <chr>
1     1      198670 18    2022… POODLE old   LB    0.702     1 m     L3    keep 
2     4      198670 18    2022… CRUST  old   C     0.773     1 m     L3    keep 
3    10      198670 18    2022… GREASE old   LB    0.617     1 m     L3    keep 
4    11      198670 18    2022… SCREW  old   LB    0.647     1 m     L3    keep 
5    14      198670 18    2022… CRADLE old   HB    1.35      1 m     L3    keep 
6    16      198670 18    2022… THROAT old   LB    0.664     1 m     L3    keep 
# ℹ 2 more variables: bad_2 <chr>, bad_3 <chr>
Code
formula <- bf(rt | dec(corr) ~ 0 + blur + 
                (1 + blur|p|participant) + (1+blur|i|string),  # drift rate
              ndt ~ 0 + blur + (1 + blur|p|participant) + (1+blur|i|string), # non-decision time
              bias = .5) # start point fixed at .5 (no response bias)

bprior <- prior(normal(0, 1), class = b) +
  prior(normal(0, 1), class = b, dpar = ndt) +
  prior(normal(0, 1), class = sd) +
  prior(normal(0, 1), class = sd, dpar = ndt) + 
  prior("normal(0, 0.3)", class = "sd", group = "participant") + 
  prior("normal(0, 0.3)", class = "sd", group = "string")
Code
make_stancode(formula, 
              family = wiener(link_bs = "identity", 
                              link_ndt = "identity",
                              link_bias = "identity"),
              data = blur_rt_diff, 
              prior = bprior)
// generated with brms 2.19.0
functions {
 /* compute correlated group-level effects
  * Args:
  *   z: matrix of unscaled group-level effects
  *   SD: vector of standard deviation parameters
  *   L: cholesky factor correlation matrix
  * Returns:
  *   matrix of scaled group-level effects
  */
  matrix scale_r_cor(matrix z, vector SD, matrix L) {
    // r is stored in another dimension order than z
    return transpose(diag_pre_multiply(SD, L) * z);
  }
  /* Wiener diffusion log-PDF for a single response
   * Args:
   *   y: reaction time data
   *   dec: decision data (0 or 1)
   *   alpha: boundary separation parameter > 0
   *   tau: non-decision time parameter > 0
   *   beta: initial bias parameter in [0, 1]
   *   delta: drift rate parameter
   * Returns:
   *   a scalar to be added to the log posterior
   */
   real wiener_diffusion_lpdf(real y, int dec, real alpha,
                              real tau, real beta, real delta) {
     if (dec == 1) {
       return wiener_lpdf(y | alpha, tau, beta, delta);
     } else {
       return wiener_lpdf(y | alpha, tau, 1 - beta, - delta);
     }
   }
}
data {
  int<lower=1> N;  // total number of observations
  vector[N] Y;  // response variable
  int<lower=0,upper=1> dec[N];  // decisions
  int<lower=1> K;  // number of population-level effects
  matrix[N, K] X;  // population-level design matrix
  int<lower=1> K_ndt;  // number of population-level effects
  matrix[N, K_ndt] X_ndt;  // population-level design matrix
  // data for group-level effects of ID 1
  int<lower=1> N_1;  // number of grouping levels
  int<lower=1> M_1;  // number of coefficients per level
  int<lower=1> J_1[N];  // grouping indicator per observation
  // group-level predictor values
  vector[N] Z_1_1;
  vector[N] Z_1_2;
  vector[N] Z_1_3;
  vector[N] Z_1_ndt_4;
  vector[N] Z_1_ndt_5;
  vector[N] Z_1_ndt_6;
  int<lower=1> NC_1;  // number of group-level correlations
  // data for group-level effects of ID 2
  int<lower=1> N_2;  // number of grouping levels
  int<lower=1> M_2;  // number of coefficients per level
  int<lower=1> J_2[N];  // grouping indicator per observation
  // group-level predictor values
  vector[N] Z_2_1;
  vector[N] Z_2_2;
  vector[N] Z_2_3;
  vector[N] Z_2_ndt_4;
  vector[N] Z_2_ndt_5;
  vector[N] Z_2_ndt_6;
  int<lower=1> NC_2;  // number of group-level correlations
  int prior_only;  // should the likelihood be ignored?
}
transformed data {
  real min_Y = min(Y);
}
parameters {
  vector[K] b;  // population-level effects
  real<lower=0> bs;  // boundary separation parameter
  vector[K_ndt] b_ndt;  // population-level effects
  vector<lower=0>[M_1] sd_1;  // group-level standard deviations
  matrix[M_1, N_1] z_1;  // standardized group-level effects
  cholesky_factor_corr[M_1] L_1;  // cholesky factor of correlation matrix
  vector<lower=0>[M_2] sd_2;  // group-level standard deviations
  matrix[M_2, N_2] z_2;  // standardized group-level effects
  cholesky_factor_corr[M_2] L_2;  // cholesky factor of correlation matrix
}
transformed parameters {
  real bias = 0.5;  // initial bias parameter
  matrix[N_1, M_1] r_1;  // actual group-level effects
  // using vectors speeds up indexing in loops
  vector[N_1] r_1_1;
  vector[N_1] r_1_2;
  vector[N_1] r_1_3;
  vector[N_1] r_1_ndt_4;
  vector[N_1] r_1_ndt_5;
  vector[N_1] r_1_ndt_6;
  matrix[N_2, M_2] r_2;  // actual group-level effects
  // using vectors speeds up indexing in loops
  vector[N_2] r_2_1;
  vector[N_2] r_2_2;
  vector[N_2] r_2_3;
  vector[N_2] r_2_ndt_4;
  vector[N_2] r_2_ndt_5;
  vector[N_2] r_2_ndt_6;
  real lprior = 0;  // prior contributions to the log posterior
  // compute actual group-level effects
  r_1 = scale_r_cor(z_1, sd_1, L_1);
  r_1_1 = r_1[, 1];
  r_1_2 = r_1[, 2];
  r_1_3 = r_1[, 3];
  r_1_ndt_4 = r_1[, 4];
  r_1_ndt_5 = r_1[, 5];
  r_1_ndt_6 = r_1[, 6];
  // compute actual group-level effects
  r_2 = scale_r_cor(z_2, sd_2, L_2);
  r_2_1 = r_2[, 1];
  r_2_2 = r_2[, 2];
  r_2_3 = r_2[, 3];
  r_2_ndt_4 = r_2[, 4];
  r_2_ndt_5 = r_2[, 5];
  r_2_ndt_6 = r_2[, 6];
  lprior += normal_lpdf(b | 0, 1);
  lprior += gamma_lpdf(bs | 1, 1);
  lprior += normal_lpdf(b_ndt | 0, 1);
  lprior += normal_lpdf(sd_1 | 0, 0.3)
    - 6 * normal_lccdf(0 | 0, 0.3);
  lprior += lkj_corr_cholesky_lpdf(L_1 | 1);
  lprior += normal_lpdf(sd_2 | 0, 0.3)
    - 6 * normal_lccdf(0 | 0, 0.3);
  lprior += lkj_corr_cholesky_lpdf(L_2 | 1);
}
model {
  // likelihood including constants
  if (!prior_only) {
    // initialize linear predictor term
    vector[N] mu = rep_vector(0.0, N);
    // initialize linear predictor term
    vector[N] ndt = rep_vector(0.0, N);
    mu += X * b;
    ndt += X_ndt * b_ndt;
    for (n in 1:N) {
      // add more terms to the linear predictor
      mu[n] += r_1_1[J_1[n]] * Z_1_1[n] + r_1_2[J_1[n]] * Z_1_2[n] + r_1_3[J_1[n]] * Z_1_3[n] + r_2_1[J_2[n]] * Z_2_1[n] + r_2_2[J_2[n]] * Z_2_2[n] + r_2_3[J_2[n]] * Z_2_3[n];
    }
    for (n in 1:N) {
      // add more terms to the linear predictor
      ndt[n] += r_1_ndt_4[J_1[n]] * Z_1_ndt_4[n] + r_1_ndt_5[J_1[n]] * Z_1_ndt_5[n] + r_1_ndt_6[J_1[n]] * Z_1_ndt_6[n] + r_2_ndt_4[J_2[n]] * Z_2_ndt_4[n] + r_2_ndt_5[J_2[n]] * Z_2_ndt_5[n] + r_2_ndt_6[J_2[n]] * Z_2_ndt_6[n];
    }
    for (n in 1:N) {
      target += wiener_diffusion_lpdf(Y[n] | dec[n], bs, ndt[n], bias, mu[n]);
    }
  }
  // priors including constants
  target += lprior;
  target += std_normal_lpdf(to_vector(z_1));
  target += std_normal_lpdf(to_vector(z_2));
}
generated quantities {
  // compute group-level correlations
  corr_matrix[M_1] Cor_1 = multiply_lower_tri_self_transpose(L_1);
  vector<lower=-1,upper=1>[NC_1] cor_1;
  // compute group-level correlations
  corr_matrix[M_2] Cor_2 = multiply_lower_tri_self_transpose(L_2);
  vector<lower=-1,upper=1>[NC_2] cor_2;
  // extract upper diagonal of correlation matrix
  for (k in 1:M_1) {
    for (j in 1:(k - 1)) {
      cor_1[choose(k - 1, 2) + j] = Cor_1[j, k];
    }
  }
  // extract upper diagonal of correlation matrix
  for (k in 1:M_2) {
    for (j in 1:(k - 1)) {
      cor_2[choose(k - 1, 2) + j] = Cor_2[j, k];
    }
  }
}
Code
tmp_dat <- make_standata(formula, 
                         family = wiener(link_bs = "identity", 
                              link_ndt = "identity",
                              link_bias = "identity"),
                            data = blur_rt_diff, prior = bprior)
str(tmp_dat, 1, give.attr = FALSE)
List of 29
 $ N         : int 17809
 $ Y         : num [1:17809(1d)] 0.702 0.773 0.617 0.647 1.346 ...
 $ dec       : num [1:17809(1d)] 1 1 1 1 1 1 1 1 1 1 ...
 $ K         : int 3
 $ X         : num [1:17809, 1:3] 0 1 0 0 0 0 1 0 0 0 ...
 $ Z_1_1     : num [1:17809(1d)] 1 1 1 1 1 1 1 1 1 1 ...
 $ Z_1_2     : num [1:17809(1d)] 0 0 0 0 1 0 0 0 1 0 ...
 $ Z_1_3     : num [1:17809(1d)] 1 0 1 1 0 1 0 1 0 1 ...
 $ Z_2_1     : num [1:17809(1d)] 1 1 1 1 1 1 1 1 1 1 ...
 $ Z_2_2     : num [1:17809(1d)] 0 0 0 0 1 0 0 0 1 0 ...
 $ Z_2_3     : num [1:17809(1d)] 1 0 1 1 0 1 0 1 0 1 ...
 $ K_ndt     : int 3
 $ X_ndt     : num [1:17809, 1:3] 0 1 0 0 0 0 1 0 0 0 ...
 $ Z_1_ndt_4 : num [1:17809(1d)] 1 1 1 1 1 1 1 1 1 1 ...
 $ Z_1_ndt_5 : num [1:17809(1d)] 0 0 0 0 1 0 0 0 1 0 ...
 $ Z_1_ndt_6 : num [1:17809(1d)] 1 0 1 1 0 1 0 1 0 1 ...
 $ Z_2_ndt_4 : num [1:17809(1d)] 1 1 1 1 1 1 1 1 1 1 ...
 $ Z_2_ndt_5 : num [1:17809(1d)] 0 0 0 0 1 0 0 0 1 0 ...
 $ Z_2_ndt_6 : num [1:17809(1d)] 1 0 1 1 0 1 0 1 0 1 ...
 $ bias      : num 0.5
 $ J_1       : int [1:17809(1d)] 1 1 1 1 1 1 1 1 1 1 ...
 $ J_2       : int [1:17809(1d)] 119 48 71 129 45 152 109 84 148 154 ...
 $ N_1       : int 216
 $ M_1       : int 6
 $ NC_1      : int 15
 $ N_2       : int 168
 $ M_2       : int 6
 $ NC_2      : int 15
 $ prior_only: int 0
Code
# custom starting values: the Wiener likelihood is undefined when the
# non-decision time exceeds an observed RT, so ndt starts small
initfun <- function() {
  list(
    b = rnorm(tmp_dat$K),                    # drift-rate fixed effects
    bs = .5,                                 # boundary separation
    b_ndt = runif(tmp_dat$K_ndt, 0.1, 0.15), # non-decision times (s)
    sd_1 = runif(tmp_dat$M_1, 0.5, 1),       # participant-level SDs
    sd_2 = runif(tmp_dat$M_2, 0.5, 1),       # item-level SDs
    z_1 = matrix(rnorm(tmp_dat$M_1*tmp_dat$N_1, 0, 0.01),
                 tmp_dat$M_1, tmp_dat$N_1),
    z_2 = matrix(rnorm(tmp_dat$M_2*tmp_dat$N_2, 0, 0.01),
                 tmp_dat$M_2, tmp_dat$N_2),
    L_1 = diag(tmp_dat$M_1),                 # identity correlation factors
    L_2 = diag(tmp_dat$M_2)
  )
}
Code
fit_wiener1 <- brm(formula, 
                  data = blur_rt_diff,
                  family = wiener(link_bs = "identity", 
                                  link_ndt = "identity",
                                  link_bias = "identity"),
                  prior = bprior, init=initfun,
                  iter = 2000, warmup = 500, 
                  chains = 4, cores = 4,
                  file="weiner_diff_1", 
                  backend = "cmdstanr", threads = threading(4), 
                  control = list(max_treedepth = 15))
Code
fit_wiener <- read_rds("https://osf.io/3j98t/download")

6.7.4.1 Hypotheses

Code
a <- hypothesis(fit_wiener, "blurHB - blurC < 0", dpar="mu")
b <- hypothesis(fit_wiener, "blurHB - blurLB < 0", dpar="mu")
c <- hypothesis(fit_wiener, "blurLB - blurC = 0", dpar="mu")

d <- hypothesis(fit_wiener, "ndt_blurHB - ndt_blurC > 0", dpar="ndt")
e <- hypothesis(fit_wiener, "ndt_blurHB - ndt_blurLB > 0", dpar="ndt")
f <- hypothesis(fit_wiener, "ndt_blurLB - ndt_blurC > 0", dpar="ndt")

tab <- bind_rows(a$hypothesis, b$hypothesis, c$hypothesis, d$hypothesis, e$hypothesis, f$hypothesis) %>% 
    mutate(Evid.Ratio=as.numeric(Evid.Ratio))%>%
  select(-Star)

tab[, -1] <- t(apply(tab[, -1], 1, round, digits = 3))

tab %>% 
  mutate(parameter=c("v","v", "v", "T_er", "T_er", "T_er"))%>%
   mutate(Hypothesis = c("High Blur - Clear < 0", "High Blur - Low Blur < 0", "Low Blur - Clear =  0 ", "High Blur - Clear < 0", "High Blur - Low Blur < 0", "Low Blur - Clear >  0 ")) %>% 
  gt(caption=md("Table: Diffusion Model Experiment 1b")) %>% 
  cols_align(
    columns=-1,
    align="right"
  )
Table: Diffusion Model Experiment 1b
Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob parameter
High Blur - Clear < 0 -0.908 0.065 -1.016 -0.799 Inf 1.000 v
High Blur - Low Blur < 0 -0.884 0.072 -1.005 -0.766 Inf 1.000 v
Low Blur - Clear = 0 -0.023 0.055 -0.130 0.086 24.3 0.961 v
High Blur - Clear > 0 0.107 0.005 0.100 0.115 Inf 1.000 T_er
High Blur - Low Blur > 0 0.093 0.005 0.085 0.102 Inf 1.000 T_er
Low Blur - Clear > 0 0.014 0.003 0.009 0.020 Inf 1.000 T_er
Code
me_mu <- conditional_effects(fit_wiener, "blur", dpar = "mu") 

plot(me_mu, plot = FALSE)[[1]] +  labs(x = "Blur", y = "Drift Rate", 
       color = "blur", fill = "blur") +  scale_x_discrete(labels=c('Clear', 'High Blur', 'Low Blur'))

Code
me_mu <- conditional_effects(fit_wiener, "blur", dpar = "ndt") 

plot(me_mu, plot = FALSE)[[1]] +  labs(x = "Blur", y = "Non-Decision Time", 
       color = "blur", fill = "blur") +  scale_x_discrete(labels=c('Clear', 'High Blur', 'Low Blur'))

6.7.5 Write-up

6.7.5.1 Diffusion Model

Looking at drift rate, high blurred words had a lower drift rate than clear words, \(b\) = -0.908, 95% Cr.I[-1.016, -0.799], ER = Inf, and than low blurred words, \(b\) = -0.884, 95% Cr.I[-1.005, -0.766], ER = Inf. There was no difference in drift rate between low blurred and clear words, \(b\) = -0.023, 95% Cr.I[-0.13, 0.086], ER = 24.322. Non-decision time was higher for high blurred words compared to clear words, \(b\) = 0.107, 95% Cr.I[0.1, 0.115], ER = Inf, and compared to low blurred words, \(b\) = 0.093, 95% Cr.I[0.085, 0.102], ER = Inf. Low blurred words had a higher non-decision time than clear words, \(b\) = 0.014, 95% Cr.I[0.009, 0.02], ER = Inf.
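For reference, the population-level estimates behind these contrasts can be pulled straight from the fitted model; a quick sketch, with row names as used in the hypothesis() calls above:

Code
# drift-rate and non-decision-time cell means per blur level
fixef(fit_wiener)[c("blurC", "blurLB", "blurHB",
                    "ndt_blurC", "ndt_blurLB", "ndt_blurHB"), ]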

6.7.6 Quantile Plots/Vincentiles

Code
#Delta plots (one per subject) 
quibble <- function(x, q = seq(.1, .9, .2)) {
  tibble(x = quantile(x, q), q = q)
}

data.quantiles <- rts %>%
  dplyr::filter(rt >= .2 & rt <= 2.5) %>% # AND, not OR: trim both tails
  dplyr::group_by(participant, blur, corr) %>%
  dplyr::filter(lex=="m") %>%
  dplyr::summarise(RT = list(quibble(rt, seq(.1, .9, .2)))) %>% 
  tidyr::unnest(RT)


data.delta <- data.quantiles %>%
  dplyr::filter(corr==1) %>%
  dplyr::select(-corr) %>%
  dplyr::group_by(participant, blur, q) %>%
  dplyr::summarize(RT=mean(x))
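As a quick illustration of what quibble() returns, here is a hypothetical call on the trimmed RTs:

Code
# one row per requested quantile, with the quantile value alongside it
quibble(blur_rt_new$rt, c(.1, .5, .9))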
Code
#Delta plots (based on vincentiles)
vincentiles <- data.quantiles %>%
  dplyr::filter(corr==1) %>%
  dplyr::select(-corr) %>%
  dplyr::group_by(blur,q) %>%
  dplyr::summarize(RT=mean(x)) 

v=vincentiles %>%
  dplyr::group_by(blur,q) %>%
  dplyr::summarise(MRT=mean(RT))

v$blur<- factor(v$blur, level=c("HB", "LB", "C"))


p <- ggplot(v, aes(x = q, y = MRT*1000, colour = blur, group=blur))+
  geom_line(size = 1) +
  geom_point(size = 3) +
  scale_colour_manual(values=met.brewer("Cassatt2", 3)) +
  theme_bw() + 
  theme(axis.title = element_text(size = 16, face = "bold"), 
        axis.text = element_text(size = 16),
        plot.title = element_text(face = "bold", size = 20)) +
  scale_y_continuous(breaks=seq(500,1600,100)) +
  theme(legend.title=element_blank())+
    coord_cartesian(ylim = c(500, 1600)) +
  scale_x_continuous(breaks=seq(.1,.9, .2))+
  geom_label_repel(data=v, aes(x=q, y=MRT*1000, label=round(MRT*1000,0)), color="black", min.segment.length = 0, seed = 42, box.padding = 0.5)+
  labs(title = "Quantile Analysis", x = "Quantiles", y = "Response latencies in ms")

p

Code
p2 <- ggplot(data = v, aes(y = MRT*1000, x = fct_relevel(blur, c("HB", "C", "LB")),
                           color = q, group = q)) + # group by quantile to draw one line per q
  geom_line() +
  geom_point(size = 4) + 
  labs(x = "blur", y = "Reaction Time (ms)")

p2

6.7.7 Delta Plots

6.7.7.1 Clear vs. High Blur

Code
#diff

v_chb <- v %>%
  dplyr::filter(blur == "C" | blur == "HB") %>%
  dplyr::group_by(q) %>%
  mutate(mean_rt = mean(MRT)*1000) %>%
  ungroup() %>%
  select(-q) %>%
  tidyr::pivot_wider(names_from = "blur", values_from = "MRT") %>%
  mutate(diff = HB*1000 - C*1000)

p3 <- ggplot(v_chb, aes(x = mean_rt, y = diff)) + 
  geom_abline(intercept = 0, slope = 0) +
  geom_line(size = 1, colour = "black") +
  geom_point(size = 3, colour = "black") +
  theme_bw() + 
  theme(legend.position = "none") + 
  theme(axis.title = element_text(size = 16, face = "bold"), 
        axis.text = element_text(size = 16),
        plot.title = element_text(face = "bold", size = 20)) +
scale_y_continuous(breaks=seq(110,440,50)) +
    coord_cartesian(ylim = c(110, 440)) +
  scale_x_continuous(breaks=seq(600,1300, 200))+
   geom_label_repel(data=v_chb, aes(y=diff, label=round(diff,0)), color="black", min.segment.length = 0, seed = 42, box.padding = 0.5)+
  labs( title = "Clear - High Blur", x = "Mean RT per quantile", y = "Group differences")

p3

6.7.7.2 Clear vs. Low Blur

Code
v_clb <- v %>%
  dplyr::filter(blur == "C" | blur == "LB") %>%
  dplyr::group_by(q) %>%
  mutate(mean_rt = mean(MRT)*1000) %>%
  ungroup() %>%
  select(-q) %>%
  tidyr::pivot_wider(names_from = "blur", values_from = "MRT") %>%
  mutate(diff = LB*1000 - C*1000)
 


p4 <- ggplot(v_clb, aes(x = mean_rt, y = diff)) + 
  geom_abline(intercept = 0, slope = 0) +
  geom_line(size = 1, colour = "black") +
  geom_point(size = 3, colour = "black") +
  theme_bw() + 
  theme(legend.position = "none") + 
  theme(axis.title = element_text(size = 16, face = "bold"), 
        axis.text = element_text(size = 16),
        plot.title = element_text(face = "bold", size = 20)) +
scale_y_continuous(breaks=seq(10, 70, 10)) +
    coord_cartesian(ylim = c(10, 70)) +
  scale_x_continuous(breaks=seq(500,1150, 200))+
    geom_label_repel(data=v_clb, aes(y=diff, label=round(diff,0)), color="black", min.segment.length = 0, seed = 42, box.padding = 0.5) + 
  labs( title = "Low Blur - Clear", x = "Mean RT per quantile", y = "Group differences")


p4

6.7.7.3 High Blur vs. Low Blur

Code
v_hlb <- v %>%
  dplyr::filter(blur == "HB" | blur == "LB") %>%
  dplyr::group_by(q) %>%
  mutate(mean_rt = mean(MRT)*1000) %>%
  ungroup() %>%
  select(-q) %>%
  tidyr::pivot_wider(names_from = "blur", values_from = "MRT") %>%
  mutate(diff = HB*1000 - LB*1000)


p5 <- ggplot(v_hlb, aes(x = mean_rt, y = diff)) + 
  geom_abline(intercept = 0, slope = 0) +
  geom_line(size = 1, colour = "black") +
  geom_point(size = 3, colour = "black") +
  theme_bw() + 
  theme(legend.position = "none") + 
  theme(axis.title = element_text(size = 16, face = "bold"), 
        axis.text = element_text(size = 16),
        plot.title = element_text(face = "bold", size = 20)) +
  scale_x_continuous(breaks=seq(600,1350, 200))+
    geom_label_repel(data=v_hlb, aes(y=diff, label=round(diff,0)), color="black", min.segment.length = 0, seed = 42, box.padding = 0.5)+ 
  labs( title = "High Blur - Low Blur", x = "Mean RT per quantile", y = "Group differences")


p5

6.7.8 Quantile/delta summary plots

Code
bottom <- cowplot::plot_grid(p3, p4,p5, 
                   ncol = 3, 
                   nrow = 1,
                   label_size = 14, 
                   hjust = -0.8, 
                   scale=.95,
                   align = "v")

cowplot::plot_grid(p, bottom, 
                   ncol=1, nrow=2)

6.8 BRM: Conditionalized Memory

Code
mem_nc <- read_csv("https://osf.io/jw2gx/download")

head(mem_nc)
# A tibble: 6 × 12
   ...1 participant string blur  date            study     rt  corr lex   sayold
  <dbl>       <dbl> <chr>  <chr> <chr>           <chr>  <dbl> <dbl> <chr>  <dbl>
1     1      198754 ALLEY  C     2022-03-02_16h… m     0.0606     1 y          1
2     2      198754 ARTIST C     2022-03-02_16h… m     0.151      1 y          1
3     3      198754 BAKER  LB    2022-03-02_16h… m     0.209      1 y          1
4     4      198754 BAMBOO HB    2022-03-02_16h… m     0.247      1 y          1
5     5      198754 BANANA C     2022-03-02_16h… m     0.290      1 y          1
6     6      198754 BASKET HB    2022-03-02_16h… m     0.527      1 y          1
# ℹ 2 more variables: condition1 <chr>, isold <dbl>

6.9 Contrast code

Code
## Contrasts
#hypothesis
blurC <- hypr(HB~C, HB~LB, levels=c("C", "HB", "LB"))
blurC

#set contrasts in df 
mem_nc$blur <- as.factor(mem_nc$blur)

contrasts(mem_nc$blur) <- contr.hypothesis(blurC)

mem_nc$isold <- ifelse(mem_nc$isold=="0", "new", "old")

isold <- hypr(new~old, levels=c("new", "old"))

mem_nc$isold <- as.factor(mem_nc$isold)

contrasts(mem_nc$isold) <- contr.hypothesis(isold) # use the hypr object, not the factor

6.10 BRM Model

Code
prior_exp2 <- c(set_prior("cauchy(0,.35)", class = "b"))

fit_mem_noc <- brm(
  sayold ~ isold*blur + (1 + isold*blur | participant) + (1 + isold*blur | string),
  data = mem_nc,
  family = bernoulli(link = "probit"),
  prior = prior_exp2,
  warmup = 1000,
  iter = 5000,
  chains = 4,
  cores = 4,
  init = 0,
  control = list(adapt_delta = 0.9),
  sample_prior = TRUE,
  save_pars = save_pars(all = TRUE),
  backend = "cmdstanr",
  threads = threading(4)
)

6.10.1 D’, C, and Differences

Code
fit_mem_noc <- read_rds("https://osf.io/2pgnm/download")

#get the low blur vs. clear contrast model
fit_mem_lbc <- read_rds("https://osf.io/tucn9/download")
Code
# (Negative) criteria
emm_m1_c1 <- emmeans(fit_mem_noc, ~blur) %>%
    parameters::parameters(centrality = "mean")

  
emm_m1_c2 <- emmeans(fit_mem_noc, ~blur) %>% 
  contrast("pairwise") %>%
    parameters::parameters(centrality = "mean")

# Dprimes for three groups
emm_m1_d1 <- emmeans(fit_mem_noc, ~isold + blur) %>% 
  contrast("revpairwise", by = "blur") %>%
    parameters::parameters(centrality = "mean")

# Differences between groups
emm_m1_d2 <- emmeans(fit_mem_noc, ~isold + blur) %>% 
  contrast(interaction = c("revpairwise", "pairwise")) %>%
    parameters::parameters(centrality = "mean")

reduce(list(emm_m1_c1, emm_m1_c2, emm_m1_d1, emm_m1_d2), bind_rows) %>% 
  select(c(1:2, 4:5)) %>%
  gt()
Parameter Mean CI_low CI_high
C 0.0410 -0.03885 0.11969
HB 0.1054 0.02843 0.17981
LB 0.0592 -0.01939 0.13462
C - HB -0.0644 -0.10654 -0.02390
C - LB -0.0182 -0.06085 0.02392
HB - LB 0.0462 0.00705 0.08605
old - new, C 1.3470 1.22228 1.47288
old - new, HB 1.4210 1.29320 1.54758
old - new, LB 1.3030 1.18136 1.42704
old - new, C - HB -0.0740 -0.15070 0.00316
old - new, C - LB 0.0439 -0.03559 0.12338
old - new, HB - LB 0.1180 0.04178 0.19658
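Because the model uses a probit link, the old - new contrast at each blur level estimates \(d'\), i.e., z(hit rate) - z(false alarm rate). Here is a minimal raw-data sketch of that calculation (no hierarchical shrinkage, so values will differ somewhat from the model-based estimates above):

Code
# aggregate d' per blur level: z(hits) - z(false alarms)
mem_nc %>%
  dplyr::group_by(blur) %>%
  dplyr::summarise(
    hit = mean(sayold[isold == "old"]),
    fa  = mean(sayold[isold == "new"]),
    dprime = qnorm(hit) - qnorm(fa)
  )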
Code
emm_m1_c1 <- emmeans(fit_mem_noc, ~blur) 

  
emm_m1_c2 <- emmeans(fit_mem_noc, ~blur) %>% 
  contrast("pairwise")

# Dprimes for three groups
emm_m1_d1 <- emmeans(fit_mem_noc, ~isold + blur) %>% 
  contrast("revpairwise", by = "blur")
# Differences between groups
emm_m1_d2 <- emmeans(fit_mem_noc, ~isold + blur) %>% 
  contrast(interaction = c("revpairwise", "pairwise")) 


tmp <- bind_rows(
  bind_rows(
    gather_emmeans_draws(emm_m1_d1) %>% 
      group_by(blur) %>% 
      select(-contrast),
    gather_emmeans_draws(emm_m1_d2) %>% 
      rename(
        blur = blur_pairwise
      ) %>% 
      group_by(blur) %>% 
      select(-isold_revpairwise)
  ),
  bind_rows(
    gather_emmeans_draws(emm_m1_c1),
    gather_emmeans_draws(emm_m1_c2) %>% 
      rename(
        blur = contrast
      )
  ),
  .id = "Parameter"
) %>% 
  mutate(Parameter = factor(Parameter, labels = c("dprime", "Criterion"))) %>% 
  mutate(
    t = if_else(str_detect(blur, " - "), "Differences", "Group means") %>% 
      fct_inorder(),
    blur = fct_inorder(blur)
  )

tmp %>%   
  ungroup() %>% 
  mutate(.value = if_else(Parameter == "Criterion", .value * -1, .value)) %>% 
  mutate(Parameter = fct_rev(Parameter)) %>% 
  ggplot(aes(blur, .value)) +
  labs(
    x = "Blurring Level (or difference)",
    y = "Parameter value"
  ) +
   stat_halfeye(colour="blue") + 
    facet_grid(Parameter~t, scales = "free") + 
  
  geom_hline(yintercept = 0, linewidth = .25) + 
  theme_bw(base_size = 16)

Posterior distributions and 95% CIs of the criterion and dprime parameters, or differences therein, from the conditionalized model
Code
a = hypothesis(fit_mem_noc , "isold1:blur1 > 0")
b= hypothesis(fit_mem_noc , "isold1:blur2 > 0")
c= hypothesis(fit_mem_lbc, "isold1:blur1 = 0")

tab <- bind_rows(a$hypothesis, b$hypothesis, c$hypothesis) %>%
    mutate(Evid.Ratio=as.numeric(Evid.Ratio))%>%
  select(-Star)

tab[, -1] <- t(apply(tab[, -1], 1, round, digits = 3))


tab %>% 
  gt(caption=md("Table: Memory Sensitvity Directional Hypotheses Experiment 2")) %>% 
  cols_align(
    columns=-1,
    align="right"
  )
Table: Memory Sensitivity Directional Hypotheses Experiment 1b
Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
(isold1:blur1) > 0 0.074 0.039 0.009 0.139 32.68 0.970
(isold1:blur2) > 0 0.118 0.040 0.053 0.183 550.72 0.998
(isold1:blur1) = 0 -0.040 0.039 -0.116 0.038 1.79 0.642

6.10.2 Write-up

6.10.3 Sensitivity

High blur words were better remembered than clear words, \(\beta\) = 0.074, 95% Cr.I[0.009, 0.139], ER = 32.68, and than low blur words, \(\beta\) = 0.118, 95% Cr.I[0.053, 0.183], ER = 550.72. There was weak evidence for no difference between clear and low blurred words, \(\beta\) = -0.04, 95% Cr.I[-0.116, 0.038], ER = 1.79.

6.11 Discussion

Our results replicate Experiment 1a when context was not reinstated at test. Specifically, during encoding, high blurred words shifted the RT distribution, produced greater skewing, had a lower drift rate \(v\), and a higher non-decision time \(T_{er}\). For low blurred words, one difference worth mentioning is that, relative to clear words, differences appear to grow across the RT distribution, although they are much smaller than for high blurred words. In the quantile plots we do see a small increase at the trailing edge of the distribution that could explain this.

Critically, during the test phase, high blurred words produced better recognition performance than clear and low blurred words.

Peirce, J., Gray, J. R., Simpson, S., MacAskill, M., Höchenberger, R., Sogo, H., Kastman, E., & Lindeløv, J. K. (2019). PsychoPy2: Experiments in behavior made easy. Behavior Research Methods, 51(1), 195–203. https://doi.org/10.3758/s13428-018-01193-y