5  Experiment 3: Semantic Categorization, Word Frequency, and Disfluency

In Experiments 1a and 1b, we employed mathematical and computational techniques to study the impact of blurring on encoding and recognition memory. High blurred words influenced both early and late stages of processing, as evidenced by increased distributional shifting and skewing, lower \(v\), and higher \(T_{er}\). Low blurred words (compared to clear words), on the other hand, only affected an early stage, indicated by increased distributional shifting and higher \(T_{er}\). In terms of recognition memory, sensitivity was higher for high blurred words than for clear and low blurred words. This implies two facets to the disfluency effect: an early, automatic/non-analytic component and a subsequent, analytic component. The locus of this later component remains ambiguous. The mnemonic benefit for recognizing high blurred words might arise from enhanced top-down (lexical or semantic) processing that offsets the challenge of reading blurred text. Alternatively, the benefit might stem from increased attention or control processes operating alongside the processes needed to recognize the word.
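To make the distinction between these distributional signatures concrete, the sketch below (with purely hypothetical parameter values, not fitted estimates) simulates ex-Gaussian RTs in which low blur produces only a shift of the whole distribution, whereas high blur produces both a shift and added skew.

Code
# illustrative only: hypothetical parameter values, not fitted estimates
set.seed(123)
n <- 1e5
clear     <- rnorm(n, mean = .65, sd = .05) + rexp(n, rate = 1 / .15)  # baseline
low_blur  <- rnorm(n, mean = .70, sd = .05) + rexp(n, rate = 1 / .15)  # shift only
high_blur <- rnorm(n, mean = .70, sd = .05) + rexp(n, rate = 1 / .25)  # shift + skew
# a shift moves all quantiles; added skew stretches the upper quantiles most
sapply(list(clear = clear, low_blur = low_blur, high_blur = high_blur),
       quantile, probs = c(.1, .5, .9))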

One way to test the accounts of perceptual disfluency more directly is to identify an aspect of higher-level information that plays a role in word perception and examine its impact on the disfluency effect. Several models of word recognition assume that the speed and ease of word identification vary as a function of word frequency (Coltheart, 1978; Forster & Chambers, 1973; McClelland & Rumelhart, 1981). RT distribution analyses of word frequency effects point to both an early and a late locus, with larger distributional shifts and more skewing for low frequency words (Andrews & Heathcote, 2001; Balota & Spieler, 1999; Plourde & Besner, 1997; Staub, 2010; Yap & Balota, 2007; but see Gomez & Perea, 2014 for a DDM account). With regard to memory, low frequency words are generally better recognized than high frequency words (Glanzer & Adams, 1985). The recognition advantage for less frequent words has been ascribed to the additional cognitive effort or attention required to process them (Diana & Reder, 2006; but see Pazzaglia et al., 2014). This has been called the elevated attention hypothesis (Malmberg & Nelson, 2003).

In tasks like semantic categorization and pronunciation, the interaction between word frequency and stimulus degradation (in this case, perceptual disfluency) is overadditive (Yap et al., 2008). By the logic of additive factors (Sternberg, 1969), if two factors interact, they are assumed to affect a common processing stage. The interplay between perceptual disfluency and word frequency arises because perceptual disfluency hinders initial processing and word identification, which magnifies the word frequency effect. Other perceptual disfluencies, such as handwritten cursive (Barnhart & Goldinger, 2010; Perea et al., 2016) and letter rotation within words (Fernández-López et al., 2022), have likewise magnified the word frequency effect.
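As a toy illustration of over-additivity (all numbers hypothetical), the frequency effect is larger under degradation than in the clear condition:

Code
# hypothetical cell means (ms): the frequency effect (LF - HF) grows with blur,
# i.e., the two factors are over-additive rather than additive
toy <- data.frame(
  blur = c("C", "C", "HB", "HB"),
  freq = c("HIGH", "LOW", "HIGH", "LOW"),
  rt   = c(700, 720, 950, 1010)
)
with(toy, tapply(rt, blur, diff))  # 20 ms for clear vs. 60 ms for high blur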

In Experiment 3, we manipulated word frequency (high vs. low frequency words) and perceptual blur (clear, low, and high) within a semantic categorization task. Mirroring Experiments 1a and 1b, the categorization task preceded a surprise recognition memory test. Our goal was to evaluate the compensatory processing and stage-specific accounts, as both offer predictions about memory performance.

In this experiment, we opted to forgo the DDM and instead focus on ex-Gaussian parameters during encoding. Both the compensatory processing and stage-specific accounts predict an interaction of word frequency and blurring on \(\mu\) and \(\tau\), with the word frequency effect largest for high blurred words. Where the accounts differ is in their predictions for memory performance.

The compensatory processing account predicts that items receiving the most top-down processing during encoding should be better remembered. On a recognition test, this account predicts a disfluency effect for low frequency items, because those items receive more top-down processing during encoding.

The stage-specific account, on the other hand, appeals to extra attentional or control processes operating during and after word recognition. Here, memory performance depends not only on the type of processing during encoding but also on limited-capacity resources such as cognitive control.

Low frequency words, like high blurred words, routinely attract attention during encoding (see evidence from pupillometry; Kuchinke et al., 2007). Thus the mnemonic benefits of perceptual disfluency may be attenuated for these items. High frequency items, on the other hand, should be more likely to benefit from a manipulation that enhances attention to, and encoding of, the item. Because low frequency words already recruit attention, perceptual disfluency could be redundant for them, and we might not see a perceptual disfluency effect for low frequency items. Ptok et al. (2019) argued that the memory benefits of conflict-encoding phenomena are limited to tasks that are relatively fluent, automatic, and encoding-poor. Any additional demands the task places on participants could reduce the disfluency effect. As evidence for this, Ptok et al. (2020) showed that manipulating endogenous attention (by using a chinrest) eliminated the memory benefit from semantic interference during encoding. Similarly, Geller & Peterson (2021) manipulated attention through a test expectancy manipulation (i.e., being told about an upcoming memory test or not), which presumably oriented participants to study all words for the upcoming memory test regardless of disfluency; this eliminated the disfluency effect. Lastly, Westerman & Greene (1997, Experiment 3) showed, with a masking manipulation, that changing the encoding instructions from reading the target word (more automatic) to spelling the target word eliminated the perceptual disfluency effect. In Experiment 3, we examine how word frequency interacts with perceptual disfluency and how this interaction affects memory.

5.1 Set-up

Below are the packages you should install to ensure this document runs properly.

Code
#load packages (duplicate calls removed)
library(plyr)        # loaded before the tidyverse so dplyr verbs are not masked
library(tidyverse)
library(easystats)
library(knitr)
library(ggeffects)
library(here)
library(data.table)
library(ggrepel)
library(gt)
library(cmdstanr)
library(brms)
library(ggdist)
library(emmeans)
library(tidylog)
library(tidybayes)
library(hypr)
library(cowplot)
library(colorspace)
library(ragg)
library(ggtext)
library(MetBrewer)
library(modelbased)
library(flextable)
library(Rfssa)

options(digits = 3)
options(timeout = 200)

set.seed(666)  # set.seed() is called directly; it is not an option()

5.2 Figure Theme

Code
bold <- element_text(face = "bold", color = "black", size = 16) #axis bold
theme_set(theme_minimal(base_size = 15, base_family = "Arial"))
theme_update(
  panel.grid.major = element_line(color = "grey92", size = .4),
  panel.grid.minor = element_blank(),
  axis.title.x = element_text(color = "grey30", margin = margin(t = 7)),
  axis.title.y = element_text(color = "grey30", margin = margin(r = 7)),
  axis.text = element_text(color = "grey50"),
  axis.ticks =  element_line(color = "grey92", size = .4),
  axis.ticks.length = unit(.6, "lines"),
  legend.position = "top",
  plot.title = element_text(hjust = 0, color = "black", 
                            family = "Arial",
                            size = 21, margin = margin(t = 10, b = 35)),
  plot.subtitle = element_text(hjust = 0, face = "bold", color = "grey30",
                               family = "Arial", 
                               size = 14, margin = margin(0, 0, 25, 0)),
  plot.title.position = "plot",
  plot.caption = element_text(color = "grey50", size = 10, hjust = 1,
                              family = "Arial", 
                              lineheight = 1.05, margin = margin(30, 0, 0, 0)),
  plot.caption.position = "plot", 
  plot.margin = margin(rep(20, 4))
)
pal <- c(met.brewer("Veronese", 3))
Code
## flat violin plots
### Relies largely on code previously written by David Robinson
### (https://gist.github.com/dgrtwo/eb7750e74997891d7c20) and ggplot2 by H. Wickham
# Define the geom_flat_violin() layer. Note: the code below modifies the
# gist by removing a parenthesis on its line 50

geom_flat_violin <- function(mapping = NULL, data = NULL, stat = "ydensity",
                             position = "dodge", trim = TRUE, scale = "area",
                             show.legend = NA, inherit.aes = TRUE, ...) {
  layer(
    data = data,
    mapping = mapping,
    stat = stat,
    geom = GeomFlatViolin,
    position = position,
    show.legend = show.legend,
    inherit.aes = inherit.aes,
    params = list(
      trim = trim,
      scale = scale,
      ...
    )
  )
}
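# the layer above needs the GeomFlatViolin ggproto object; a minimal version,
# adapted from the same gist (https://gist.github.com/dgrtwo/eb7750e74997891d7c20),
# is reproduced here so this chunk is self-contained
`%||%` <- function(a, b) if (!is.null(a)) a else b

GeomFlatViolin <- ggproto("GeomFlatViolin", Geom,
  setup_data = function(data, params) {
    data$width <- data$width %||% params$width %||%
      (resolution(data$x, FALSE) * 0.9)
    # compute a bounding box for each group
    data %>%
      dplyr::group_by(group) %>%
      dplyr::mutate(ymin = min(y), ymax = max(y), xmin = x, xmax = x + width / 2)
  },
  draw_group = function(data, panel_scales, coord) {
    # trace the density up one side of the violin and back down the other
    data <- transform(data, xminv = x, xmaxv = x + violinwidth * (xmax - x))
    newdata <- rbind(
      plyr::arrange(transform(data, x = xminv), y),
      plyr::arrange(transform(data, x = xmaxv), -y)
    )
    newdata <- rbind(newdata, newdata[1, ])
    ggplot2:::ggname("geom_flat_violin", GeomPolygon$draw_panel(newdata, panel_scales, coord))
  },
  draw_key = draw_key_polygon,
  default_aes = aes(weight = 1, colour = "grey20", fill = "white",
                    size = 0.5, alpha = NA, linetype = "solid"),
  required_aes = c("x", "y")
)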
# horizontal nudge position adjustment
# copied from https://github.com/tidyverse/ggplot2/issues/2733
position_hnudge <- function(x = 0) {
  ggproto(NULL, PositionHNudge, x = x)
}
PositionHNudge <- ggproto("PositionHNudge", Position,
                          x = 0,
                          required_aes = "x",
                          setup_params = function(self, data) {
                            list(x = self$x)
                          },
                          compute_layer = function(data, params, panel) {
                            transform_position(data, function(x) x + params$x)
                          }
)

5.3 Method

This study was preregistered at https://osf.io/kjq3t. All raw and summary data, materials, and R scripts for pre-processing, analysis, and plotting for Experiment 3 can be found on our OSF page: https://osf.io/6sy7k/.

5.4 Participants

5.5 Materials

Non-animal and animal words were adapted from Fernández-López et al. (2022). To make the experiment more feasible for online participants and to evenly split our conditions, we winnowed their non-animal words and presented 90 non-animal words (half high frequency, half low frequency) and 45 animal words during study. This kept the 2:1 ratio used in previous experiments (e.g., Fernández-López et al., 2022; Perea et al., 2018). At test, 90 non-animal words not used during the semantic categorization task served as new words for the recognition test. We created six counterbalanced lists to ensure that, across participants, each non-animal word was presented as both old and new and as clear, high blurred, and low blurred. As with the non-words in Experiments 1a and 1b, animal words were excluded from analysis.

The number of letters of the animal words (M = 5.3; range: 3-9) was similar to that of the non-animal words (high-frequency words: M = 5.3, range: 3-8; low-frequency words: M = 5.3, range: 3-9). The animal words had an ample range of word-frequency in the SUBTLEX database (M = 11.84 per million; range: 0.61-192.84).

5.6 Procedure

We used the same procedure as Experiment 1b. The main difference was that instead of making a word/non-word decision, participants made a semantic categorization judgement (i.e., animal/not animal).

5.7 Results

5.7.1 Accuracy

Code
rts_wf <- read_csv("https://osf.io/29hnd/download")

head(rts_wf)
# A tibble: 6 × 15
   ...1 participant      date    age target study blur  frequency category    rt
  <dbl> <chr>            <chr> <dbl> <chr>  <chr> <chr> <chr>     <chr>    <dbl>
1     1 54847f1cfdf99b0… 2023…    50 SWEEP  old   C     LOW       NONAN     3.29
2     2 54847f1cfdf99b0… 2023…    50 STATUE old   HB    LOW       NONAN     4.84
3     3 54847f1cfdf99b0… 2023…    50 WAITR… old   LB    LOW       NONAN     2.44
4     4 54847f1cfdf99b0… 2023…    50 TRUE   old   HB    HIGH      NONAN     2.43
5     5 54847f1cfdf99b0… 2023…    50 START  old   HB    HIGH      NONAN     2.43
6     6 54847f1cfdf99b0… 2023…    50 PLEAD  old   LB    LOW       NONAN     1.74
# ℹ 5 more variables: corr <dbl>, List <dbl>, bad_1 <chr>, bad_2 <chr>,
#   bad_3 <chr>

5.8 BRMs

5.8.1 Accuracy

Code
rts_dim <- rts_wf %>%
  filter(category=="NONAN")

blur_acc_wf<- rts_wf %>%
  group_by(participant) %>%
  dplyr::filter(rt >= .2 & rt <= 2.5)%>%
  dplyr::filter(category=="NONAN")

head(blur_acc_wf)
# A tibble: 6 × 15
# Groups:   participant [1]
   ...1 participant      date    age target study blur  frequency category    rt
  <dbl> <chr>            <chr> <dbl> <chr>  <chr> <chr> <chr>     <chr>    <dbl>
1     3 54847f1cfdf99b0… 2023…    50 WAITR… old   LB    LOW       NONAN     2.44
2     4 54847f1cfdf99b0… 2023…    50 TRUE   old   HB    HIGH      NONAN     2.43
3     5 54847f1cfdf99b0… 2023…    50 START  old   HB    HIGH      NONAN     2.43
4     6 54847f1cfdf99b0… 2023…    50 PLEAD  old   LB    LOW       NONAN     1.74
5     7 54847f1cfdf99b0… 2023…    50 FOREV… old   C     HIGH      NONAN     1.64
6     9 54847f1cfdf99b0… 2023…    50 SNEEZE old   C     LOW       NONAN     1.63
# ℹ 5 more variables: corr <dbl>, List <dbl>, bad_1 <chr>, bad_2 <chr>,
#   bad_3 <chr>
Code
dim(blur_acc_wf)
[1] 38526    15
Code
dim(rts_dim)
[1] 38880    15

We started with 38,880 data points. After removing RTs below .2 s and above 2.5 s (0.9% of trials), we were left with 38,526 data points.
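The exclusion rate can be recovered directly from the two dim() calls above:

Code
# proportion of trials removed by the .2-2.5 s RT cutoffs
n_all  <- nrow(rts_dim)       # 38880
n_kept <- nrow(blur_acc_wf)   # 38526
round(1 - n_kept / n_all, 3)  # 0.009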

Code
## Contrasts
#hypothesis
blurC <-hypr(HB~C, HB~LB, levels=c("C", "HB", "LB"))
blurC
hypr object containing 2 null hypotheses:
H0.1: 0 = HB - C
H0.2: 0 = HB - LB

Call:
hypr(~HB - C, ~HB - LB, levels = c("C", "HB", "LB"))

Hypothesis matrix (transposed):
   [,1] [,2]
C  -1    0  
HB  1    1  
LB  0   -1  

Contrast matrix:
   [,1] [,2]
C  -2/3  1/3
HB  1/3  1/3
LB  1/3 -2/3
Code
#set contrasts in df 
blur_acc_wf$blur <- as.factor(blur_acc_wf$blur)

contrasts(blur_acc_wf$blur) <-contr.hypothesis(blurC)


freqc <- hypr(HIGH~LOW,levels=c("HIGH", "LOW"))
freqc
hypr object containing one (1) null hypothesis:
H0.1: 0 = HIGH - LOW

Call:
hypr(~HIGH - LOW, levels = c("HIGH", "LOW"))

Hypothesis matrix (transposed):
     [,1]
HIGH  1  
LOW  -1  

Contrast matrix:
     [,1]
HIGH  1/2
LOW  -1/2
Code
blur_acc_wf$frequency<- as.factor(blur_acc_wf$frequency)

contrasts(blur_acc_wf$frequency) <-contr.hypothesis(freqc)
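As a quick sanity check on this coding (using hypothetical cell means), the regression coefficients implied by the hypr-generated contrast matrix recover exactly the comparisons specified above:

Code
# hypothetical cell means for C, HB, LB; solving the design equation should
# return b1 = HB - C and b2 = HB - LB, with the intercept at the grand mean
m <- c(C = .995, HB = .960, LB = .985)
X <- cbind(intercept = 1, contr.hypothesis(blurC))
round(solve(X, m), 3)  # intercept = .980, b1 = -.035, b2 = -.025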

5.8.2 Model

Code
prior_expsc <- c(set_prior("cauchy(0,.35)", class = "b"))

fit_acc_wf <- brm(corr ~ blur*frequency + (1+blur*frequency|participant) + (1+blur*frequency|target), data=blur_acc_wf, 
warmup = 1000,
                    iter = 5000,
                    chains = 4, 
                    init=0, 
                    family = bernoulli(),
     cores = 4, 

prior=prior_expsc,
sample_prior = T, 
save_pars = save_pars(all=T),
control = list(adapt_delta = 0.9), 
file="acc_blmm_sc", 
backend="cmdstanr", 
threads = threading(4))
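To give a feel for what the cauchy(0, .35) slope prior implies on the log-odds scale:

Code
# half the prior mass lies within about +/- .35 logits, with heavy tails
# allowing occasional large effects
qcauchy(c(.25, .75), location = 0, scale = .35)     # central 50% interval
round(pcauchy(1, 0, .35) - pcauchy(-1, 0, .35), 2)  # ~.79 within +/- 1 logit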
Code
# get file from osf
tmp <- tempdir()
download.file("https://osf.io/5u7p8/download", file.path(tmp, "acc_blmm_sc.RData"))
load(file.path(tmp, "acc_blmm_sc.RData"))

fit_acc_sc_lb <- read_rds("https://osf.io/ehjxq/download")

5.9 Model Summary

5.9.1 Hypotheses

Code
# marginal accuracy (response probability) for each blur x frequency cell
emm_acc <- emmeans(fit_acc_sc, ~frequency + blur, type="response") %>% 
  parameters::parameters(centrality = "mean")


emm_acc
Parameter | Mean | Mean.1 |       95% CI |   pd
-----------------------------------------------
HIGH, C   | 1.00 |   1.00 | [1.00, 1.00] | 100%
LOW, C    | 1.00 |   1.00 | [1.00, 1.00] | 100%
HIGH, HB  | 0.99 |   0.99 | [0.99, 0.99] | 100%
LOW, HB   | 0.99 |   0.99 | [0.99, 0.99] | 100%
HIGH, LB  | 1.00 |   1.00 | [0.99, 1.00] | 100%
LOW, LB   | 1.00 |   1.00 | [0.99, 1.00] | 100%
Code
a = hypothesis(fit_acc_sc , "blur1 < 0")

b = hypothesis(fit_acc_sc , "blur2 < 0")

c = hypothesis(fit_acc_sc_lb, "blur1 < 0")

d= hypothesis(fit_acc_sc, "frequency1 = 0")

e = hypothesis(fit_acc_sc, "blur1:frequency1 = 0")

f = hypothesis(fit_acc_sc , "blur2:frequency1 = 0")

g = hypothesis(fit_acc_sc_lb, "blur1:frequency1 = 0")

tab <- bind_rows(a$hypothesis, b$hypothesis, c$hypothesis, d$hypothesis, e$hypothesis, f$hypothesis, g$hypothesis) %>%
  mutate(Evid.Ratio=as.numeric(Evid.Ratio))%>%
  select(-Star)

tab[, -1] <- t(apply(tab[, -1], 1, round, digits = 3))

tab %>% 
   mutate(Hypothesis = c("High Blur - Clear < 0", "High Blur-Low Blur < 0", "Low Blur - Clear = 0","Low Frequency - High Frequency",  "(High Blur-Clear) - (Low Frequency-High Frequency) < 0", "(High Blur-Low Blur) - (Low Frequency-High Frequency) < 0", "(Low Blur-Clear) - (Low Frequency-High Frequency) =  0")) %>% 
  gt(caption=md("Table: Experiment 3 Accuracy")) %>% 
  cols_align(
    columns=-1,
    align="right"
  )
Table: Experiment 3 Accuracy
Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
High Blur - Clear < 0 -1.128 0.271 -1.590 -0.689 Inf 1.000
High Blur-Low Blur < 0 -0.764 0.238 -1.154 -0.374 1999.000 1.000
Low Blur - Clear < 0 -0.072 0.123 -0.273 0.130 2.638 0.725
Low Frequency - High Frequency 0.132 0.195 -0.235 0.528 1.166 0.538
(High Blur-Clear) - (Low Frequency-High Frequency) = 0 0.017 0.242 -0.462 0.514 1.156 0.536
(High Blur-Low Blur) - (Low Frequency-High Frequency) = 0 -0.002 0.234 -0.466 0.463 1.236 0.553
(Low Blur-Clear) - (Low Frequency-High Frequency) = 0 0.088 0.215 -0.337 0.508 0.597 0.374

5.9.2 Accuracy Summary

There was no frequency effect on accuracy, $b$ = 0.132, 95% Cr.I[-0.235, 0.528], ER = 1.166, although the evidence for the absence of a difference was ambiguous.

Turning to blurring, clear words were better identified than high blurred words (\(M\) = .963), $b$ = -1.128, 95% Cr.I[-1.59, -0.689], ER = Inf. Low blurred words were better identified than high blurred words, $b$ = -0.764, 95% Cr.I[-1.154, -0.374], ER = 1999. There was only weak evidence that low blurred words were identified less accurately than clear words, $b$ = -0.072, 95% Cr.I[-0.273, 0.13], ER = 2.638.

There were no interactions between blurring and word frequency (all 95% Cr.I included 0); however, the evidence for the absence of an interaction was ambiguous (ERs ≈ 1).

5.10 RTs

Code
p_rt_filter <- rts_wf %>%
  filter(corr==1, category=="NONAN")


p_rt_out <- p_rt_filter %>% 
 filter(rt >= .2 & rt <= 2.5) %>%
  ungroup()

p_rt <- p_rt_filter %>%
   filter(rt >= .2 & rt <= 2.5) %>%
  group_by(frequency, blur) %>%
  dplyr::summarise(rt=mean(rt)) %>%
  dplyr::mutate(rt_ms=rt*1000) %>%
  select(-rt)

# table for the effect
p_rt %>% group_by(blur) %>%
  pivot_wider(names_from=frequency, values_from=rt_ms) %>%
  mutate(Freq_Effect=round(LOW-HIGH)) %>%
  flextable()  

blur HIGH LOW Freq_Effect
C 705 723 18
HB 961 1,016 55
LB 718 730 13

We had 38,152 correct RT trials for non-animal responses. After removing RTs below .2 s and above 2.5 s, we were left with 37,823 trials.

Code
## Contrasts
#hypothesis
#set contrasts in df 
p_rt_filter$blur <- as.factor(p_rt_filter$blur)

contrasts(p_rt_filter$blur) <-contr.hypothesis(blurC)


freqc <- hypr(HIGH~LOW,levels=c("HIGH", "LOW"))
freqc
hypr object containing one (1) null hypothesis:
H0.1: 0 = HIGH - LOW

Call:
hypr(~HIGH - LOW, levels = c("HIGH", "LOW"))

Hypothesis matrix (transposed):
     [,1]
HIGH  1  
LOW  -1  

Contrast matrix:
     [,1]
HIGH  1/2
LOW  -1/2
Code
p_rt_filter$frequency<- as.factor(p_rt_filter$frequency)

contrasts(p_rt_filter$frequency) <-contr.hypothesis(freqc)

5.10.1 Ex-Gaussian

5.10.1.1 Model Set-up

Code
library(cmdstanr)
#max model
bform_exg1 <- bf(
rt ~ blur*frequency + (1 + blur*frequency |p| participant) + (1 + blur|i| Target),
sigma ~ blur*frequency + (1 + blur*frequency |p|participant) + (1 + blur |i| Target),
beta ~ blur*frequency  + (1 + blur*frequency |p|participant) + (1 + blur |i| Target))
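A note on the parameterization: in brms's exgaussian() family, mu is the mean of the whole distribution, sigma is the SD of the Gaussian component, and beta is the mean of the exponential component, i.e., the ex-Gaussian \(\tau\). The short simulation below (illustrative values only) makes this explicit:

Code
# rt = normal(mu - beta, sigma) + exponential(mean = beta), so E[rt] = mu;
# shifts in mu (beta constant) reflect distributional shifting, while
# changes in beta reflect skewing
set.seed(1)
mu <- .85; sigma <- .05; beta <- .20  # illustrative values, in seconds
rt_sim <- rnorm(1e5, mu - beta, sigma) + rexp(1e5, rate = 1 / beta)
c(simulated = mean(rt_sim), mu = mu)  # the simulated mean recovers mu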

5.10.1.2 Run model

Code
prior_exp1 <- c(set_prior("normal(0,100)", class = "b", coef=""))

fit_exg1 <- brm(
bform_exg1, data = blur_rt_sc,
warmup = 1000,
                    iter = 5000,
                    chains = 4,
                    family = exgaussian(),
                    init = 0,
                    cores = 4,
control = list(adapt_delta = 0.8), 
backend="cmdstanr", 
file = "blmm_sc_wf",
threads = threading(4))
Code
fit_sc <- read_rds("https://osf.io/kdv38/download")
fit_sc_lc <- read_rds("https://osf.io/49bgx/download")

5.11 Model summary

Code
a = hypothesis(fit_sc , "blur1 > 0", dpar="mu")

b = hypothesis(fit_sc , "blur2 >  0", dpar="mu")

c = hypothesis(fit_sc_lc, "blur1 = 0", dpar="mu")

d= hypothesis(fit_sc, "frequency1 > 0", dpar="mu")

e = hypothesis(fit_sc, "blur1:frequency1 < 0 ", dpar="mu")

f = hypothesis(fit_sc , "blur2:frequency1 < 0 ", dpar="mu")

g = hypothesis(fit_sc_lc, "blur1:frequency1 = 0", dpar="mu")

tab <- bind_rows(a$hypothesis, b$hypothesis, c$hypothesis, d$hypothesis, e$hypothesis, f$hypothesis, g$hypothesis) %>%
  mutate(Evid.Ratio=as.numeric(Evid.Ratio))%>%
  select(-Star)

tab[, -1] <- t(apply(tab[, -1], 1, round, digits = 3))

tab %>% 
   mutate(Hypothesis = c("High Blur - Clear > 0", "High Blur-Low Blur > 0", "Low Blur - Clear = 0","Low Frequency - High Frequency",  "(High Blur-Clear) - (Low Frequency-High Frequency) < 0", "(High Blur-Low Blur) - (Low Frequency-High Frequency) < 0", "(Low Blur-Clear) - (Low Frequency-High Frequency) =  0")) %>% 
  gt(caption=md("Table: Experiment 3 Memory Mu")) %>% 
  cols_align(
    columns=-1,
    align="right"
  )
Table: Experiment 3 Ex-Gaussian Mu
Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
High Blur - Clear > 0 0.274 0.007 0.262 0.286 Inf 1.000
High Blur-Low Blur > 0 0.265 0.007 0.253 0.277 Inf 1.000
Low Blur - Clear = 0 0.002 0.006 -0.010 0.015 1500 0.999
Low Frequency - High Frequency -0.026 0.006 -0.035 -0.016 0 0.000
(High Blur-Clear) - (Low Frequency-High Frequency) < 0 -0.029 0.012 -0.050 -0.009 96 0.990
(High Blur-Low Blur) - (Low Frequency-High Frequency) < 0 -0.032 0.012 -0.052 -0.012 181 0.995
(Low Blur-Clear) - (Low Frequency-High Frequency) = 0 -0.017 0.009 -0.036 0.000 170 0.994

5.11.1 Mu

High blurred words showed greater shifting than clear words, $b$ = 0.274, 95% Cr.I[0.262, 0.286], ER = Inf, and low blurred words, $b$ = 0.265, 95% Cr.I[0.253, 0.277], ER = Inf. There was no difference in the amount of shifting between low blurred and clear words, $b$ = 0.002, 95% Cr.I[-0.01, 0.015], ER = 1500.042. For word frequency, there was greater shifting for low frequency than high frequency words, $b$ = -0.026, 95% Cr.I[-0.035, -0.016], ER = 0. In terms of the interaction between frequency and blurring, the word frequency effect was amplified for high blurred words compared to clear words, $b$ = -0.029, 95% Cr.I[-0.05, -0.009], ER = 95.97, and low blurred words, $b$ = -0.032, 95% Cr.I[-0.052, -0.012], ER = 180.818. There was strong evidence against an amplification of the word frequency effect for the low blurred vs. clear comparison, $b$ = -0.017, 95% Cr.I[-0.036, \(-2.118 \times 10^{-4}\)], ER = 170.388.

Code
a = hypothesis(fit_sc , "sigma_blur1 > 0")

b = hypothesis(fit_sc , "sigma_blur2 > 0")

c = hypothesis(fit_sc_lc, "sigma_blur1 > 0")

d= hypothesis(fit_sc, "sigma_frequency1 < 0")

e = hypothesis(fit_sc, "sigma_blur1:frequency1 < 0")

f = hypothesis(fit_sc , "sigma_blur2:frequency1 > 0")

g = hypothesis(fit_sc_lc, "sigma_blur1:frequency1 > 0")

tab <- bind_rows(a$hypothesis, b$hypothesis, c$hypothesis, d$hypothesis, e$hypothesis, f$hypothesis, g$hypothesis) %>%
  mutate(Evid.Ratio=as.numeric(Evid.Ratio))%>%
  select(-Star)

tab[, -1] <- t(apply(tab[, -1], 1, round, digits = 3))

tab %>% 
   mutate(Hypothesis = c("High Blur - Clear > 0", "High Blur-Low Blur > 0", "Low Blur - Clear = 0","Low Frequency - High Frequency",  "(High Blur-Clear) - (Low Frequency-High Frequency) < 0", "(High Blur-Low Blur) - (Low Frequency-High Frequency) < 0", "(Low Blur-Clear) - (Low Frequency-High Frequency) =  0")) %>% 
  gt(caption=md("Table: Experiment 3 Ex-Gaussian Sigma")) %>% 
  cols_align(
    columns=-1,
    align="right"
  )
Table: Experiment 3 Ex-Gaussian Sigma
Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
High Blur - Clear > 0 0.688 0.049 0.608 0.769 Inf 1.000
High Blur-Low Blur > 0 0.707 0.051 0.623 0.791 Inf 1.000
Low Blur - Clear > 0 0.012 0.035 -0.045 0.070 1.700 0.630
Low Frequency - High Frequency -0.076 0.037 -0.136 -0.016 50.282 0.981
(High Blur-Clear) - (Low Frequency-High Frequency) < 0 0.074 0.082 -0.060 0.210 0.222 0.182
(High Blur-Low Blur) - (Low Frequency-High Frequency) > 0 0.119 0.083 -0.018 0.256 12.051 0.923
(Low Blur-Clear) - (Low Frequency-High Frequency) > 0 0.015 0.064 -0.089 0.121 1.462 0.594

5.11.2 Sigma

Low frequency words showed greater variance than high frequency words, $b$ = -0.076, 95% Cr.I[-0.136, -0.016], ER = 50.282.

High blurred words had higher \(\sigma\) than clear words, $b$ = 0.688, 95% Cr.I[0.608, 0.769], ER = Inf, and low blurred words, $b$ = 0.707, 95% Cr.I[0.623, 0.791], ER = Inf. There was weak evidence that low blurred words had greater variance than clear words, $b$ = 0.012, 95% Cr.I[-0.045, 0.07], ER = 1.7. None of the interactions were credible: all 95% Cr.I included 0.

Code
a = hypothesis(fit_sc , "beta_blur1 > 0", dpar="beta")

b = hypothesis(fit_sc , "beta_blur2 > 0", dpar="beta")

c = hypothesis(fit_sc_lc, "beta_blur1 < 0", dpar="beta")

d= hypothesis(fit_sc, "beta_frequency1 < 0", dpar="beta")

e = hypothesis(fit_sc, "beta_blur1:frequency1 < 0", dpar="beta")

f = hypothesis(fit_sc , "beta_blur2:frequency1 < 0", dpar="beta")

g = hypothesis(fit_sc_lc, "beta_blur1:frequency1 < 0", dpar="beta")

tab <- bind_rows(a$hypothesis, b$hypothesis, c$hypothesis, d$hypothesis, e$hypothesis, f$hypothesis, g$hypothesis) %>% 
    mutate(Evid.Ratio=as.numeric(Evid.Ratio))%>%
  select(-Star)

tab[, -1] <- t(apply(tab[, -1], 1, round, digits = 3))


tab %>% 
  mutate(Hypothesis = c("High Blur - Clear > 0", "High Blur-Low Blur > 0", "Low Blur - Clear = 0","Low Frequency - High Frequency",  "(High Blur-Clear) - (Low Frequency-High Frequency) < 0", "(High Blur-Low Blur) - (Low Frequency-High Frequency) < 0", "(Low Blur-Clear) - (Low Frequency-High Frequency) <  0")) %>% 
  gt(caption=md("Table: Experiment 3 Ex-Gaussian Beta")) %>% 
  cols_align(
    columns=-1,
    align="right"
  )
Table: Experiment 3 Ex-Gaussian Beta
Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
High Blur - Clear > 0 0.531 0.027 0.487 0.575 Inf 1.000
High Blur-Low Blur > 0 0.549 0.027 0.504 0.593 Inf 1.000
Low Blur - Clear < 0 -0.038 0.036 -0.097 0.020 6.4 0.865
Low Frequency - High Frequency -0.029 0.021 -0.062 0.005 11.0 0.917
(High Blur-Clear) - (Low Frequency-High Frequency) < 0 -0.077 0.042 -0.146 -0.008 30.8 0.969
(High Blur-Low Blur) - (Low Frequency-High Frequency) < 0 -0.143 0.043 -0.213 -0.073 2665.7 1.000
(Low Blur-Clear) - (Low Frequency-High Frequency) < 0 -0.098 0.050 -0.184 -0.022 57.8 0.983

5.11.3 Beta

Low frequency words showed greater skewing than high frequency words, $b$ = -0.029, 95% Cr.I[-0.062, 0.005], ER = 11.012.

High blurred words showed greater skewing than clear words, $b$ = 0.531, 95% Cr.I[0.487, 0.575], ER = Inf, and low blurred words, $b$ = 0.549, 95% Cr.I[0.504, 0.593], ER = Inf. There was moderate evidence that low blurred words showed less skewing than clear words, $b$ = -0.038, 95% Cr.I[-0.097, 0.02], ER = 6.401.

The word frequency effect was magnified for high blurred words compared to clear words, $b$ = -0.077, 95% Cr.I[-0.146, -0.008], ER = 30.809, and low blurred words, $b$ = -0.143, 95% Cr.I[-0.213, -0.073], ER = 2665.667. There was also an interaction for the low blurred vs. clear comparison, $b$ = -0.098, 95% Cr.I[-0.184, -0.022], ER = 57.824. However, the word frequency effect was reversed here, with low blurred-high frequency words showing greater skewing than low blurred-low frequency words.

5.11.4 Ex-Gaussian conditional plots

Code
p1 <- conditional_effects(fit_sc, effects = "blur:frequency", dpar = "mu")
p2 <- conditional_effects(fit_sc, effects = "blur", dpar = "sigma")
p3 <- conditional_effects(fit_sc, effects = "blur:frequency", dpar = "beta")

p1

Code
p2

Code
p3

5.12 Quantile Plots/Vincentiles


5.12.1 Figure 1

Code
#Delta plots (one per subject)
quibble <- function(x, q = seq(.1, .9, .2)) {
  tibble(x = quantile(x, q), q = q)
}
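# quibble() returns each requested quantile of x alongside its probability,
# one row per quantile, so the result can be unnested into a tidy frame; e.g.
# quibble(1:100, c(.1, .5, .9)) gives x = 10.9, 50.5, 90.1 with q = .1, .5, .9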

data.quantiles <- p_rt_out %>%
  dplyr::group_by(participant,blur,frequency, corr) %>%
  dplyr::mutate(rt_ms = rt*1000) %>% 
  dplyr::summarise(RT = list(quibble(rt_ms, seq(.1, .9, .2)))) %>% 
  tidyr::unnest(RT) %>%
  ungroup()


data.delta <- data.quantiles %>%
  dplyr::group_by(participant, blur,frequency,  q) %>%
  dplyr::summarize(RT=mean(x)) %>%
  ungroup()
Code
#Delta plots (based on vincentiles)
vincentiles <- data.quantiles %>%
  dplyr::group_by(blur,frequency, q) %>%
  dplyr::summarize(RT=mean(x)) %>%
  ungroup()

v=vincentiles %>%
  dplyr::group_by(blur,frequency, q) %>%
  dplyr::summarise(MRT=mean(RT)) %>%
  ungroup()
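# vincentizing in miniature (toy data, for intuition only): quantiles are
# computed per participant and then averaged across participants at each
# quantile point
set.seed(2)
toy <- data.frame(pp = rep(1:3, each = 100),
                  rt = 500 + rexp(300, rate = 1 / 200))
rowMeans(sapply(split(toy$rt, toy$pp), quantile, probs = seq(.1, .9, .2)))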


p <- ggplot(v, aes(x = q, y = MRT, colour=blur)) +
  facet_grid(~frequency) + 
  geom_line(size = 1) +
  geom_point(size = 3) +
  scale_colour_manual(values=met.brewer("Cassatt2", 3)) +
  theme_bw() + 
  theme(axis.title = element_text(size = 16, face = "bold"), 
        axis.text = element_text(size = 16),
        plot.title = element_text(face = "bold", size = 20)) +
  scale_y_continuous(breaks=seq(500,1600,100)) +
  theme(legend.title=element_blank())+
    coord_cartesian(ylim = c(500, 1600)) +
  scale_x_continuous(breaks=seq(.1,.9, .2))+
    geom_label_repel(data=v, aes(x=q, y=MRT, label=round(MRT,0)), color="black", min.segment.length = 0, seed = 42, box.padding = 0.5)+
  labs(title = "Quantile Analysis", x = "Quantiles", y = "Response latencies in ms")

p

Code
p1 <- ggplot(v, aes(x = q, y = MRT, colour=frequency)) +
  facet_grid(~blur) + 
  geom_line(size = 1) +
  geom_point(size = 3) +
  scale_colour_manual(values=met.brewer("Cassatt2", 3)) +
  theme_bw() + 
  theme(axis.title = element_text(size = 16, face = "bold"), 
        axis.text = element_text(size = 16),
        plot.title = element_text(face = "bold", size = 20)) +
  scale_y_continuous(breaks=seq(600,1600,100)) +
  theme(legend.title=element_blank())+
    coord_cartesian(ylim = c(600, 1600)) +
  scale_x_continuous(breaks=seq(.1,.9, .2))+
   geom_label_repel(data=v, aes(y=MRT, label=round(MRT,0)), color="black", min.segment.length = 0, seed = 42, box.padding = 0.5)+
  labs(title = "Quantile Analysis", x = "Quantiles", y = "Response latencies in ms")

p1

5.12.2 Figure 2

Code
p2 <- ggplot(data=v,aes(y=MRT, x=frequency, color=q)) + facet_grid(~blur)+
  geom_line()+
  geom_point(size=4) + 
  ggeasy::easy_add_legend_title("Quantiles")

p2

5.12.3 Delta Plots

Code
#diff
v_wf <- v %>%
  dplyr::group_by(blur, q)%>%
  tidyr::pivot_wider(names_from = "frequency", values_from = "MRT") %>%
  mutate(diff=LOW-HIGH) %>%
  ungroup()

v_wf %>% select(blur, q, diff) %>% pivot_wider(names_from="q", values_from="diff") %>% flextable()

blur 0.1 0.3 0.5 0.7 0.9
C 12.10 17.2 18.6 19.2 25.1
HB 31.46 41.8 48.7 67.2 98.4
LB 9.92 12.1 12.0 13.6 17.7

Code
v_chb <- v %>%
  dplyr::filter(blur=="C" | blur=="HB") %>%
  dplyr::group_by(frequency, q)%>%
  mutate(mean_rt = mean(MRT)) %>% 
  ungroup()%>% 
  tidyr::pivot_wider(names_from = "blur", values_from = "MRT") %>%
  mutate(diff=HB-C)
v_chb
# A tibble: 10 × 6
   frequency     q mean_rt     C    HB  diff
   <chr>     <dbl>   <dbl> <dbl> <dbl> <dbl>
 1 HIGH        0.1    653.  575.  731.  157.
 2 HIGH        0.3    722.  623.  821.  198.
 3 HIGH        0.5    792.  669.  915.  247.
 4 HIGH        0.7    881.  731. 1031.  300.
 5 HIGH        0.9   1060.  868. 1253.  386.
 6 LOW         0.1    675.  587.  763.  176.
 7 LOW         0.3    751.  640.  863.  223.
 8 LOW         0.5    826.  687.  964.  277.
 9 LOW         0.7    925.  751. 1099.  348.
10 LOW         0.9   1122.  893. 1352.  459.
Code
v_chb_freq <- v %>%
  dplyr::group_by(blur, q)%>%
  mutate(mean_rt = mean(MRT)) %>% 
  ungroup()%>% 
  tidyr::pivot_wider(names_from = "frequency", values_from = "MRT") %>%
  mutate(diff=LOW-HIGH)
v_chb_freq
# A tibble: 15 × 6
   blur      q mean_rt  HIGH   LOW  diff
   <chr> <dbl>   <dbl> <dbl> <dbl> <dbl>
 1 C       0.1    581.  575.  587. 12.1 
 2 C       0.3    631.  623.  640. 17.2 
 3 C       0.5    678.  669.  687. 18.6 
 4 C       0.7    741.  731.  751. 19.2 
 5 C       0.9    880.  868.  893. 25.1 
 6 HB      0.1    747.  731.  763. 31.5 
 7 HB      0.3    842.  821.  863. 41.8 
 8 HB      0.5    940.  915.  964. 48.7 
 9 HB      0.7   1065. 1031. 1099. 67.2 
10 HB      0.9   1302. 1253. 1352. 98.4 
11 LB      0.1    593.  588.  598.  9.92
12 LB      0.3    641.  635.  647. 12.1 
13 LB      0.5    688.  682.  694. 12.0 
14 LB      0.7    750.  744.  757. 13.6 
15 LB      0.9    896.  888.  905. 17.7 
Code
p1 <- ggplot(v_chb, aes(x = mean_rt, y = diff)) + facet_grid(~frequency) + 
  geom_abline(intercept = 0, slope = 0) +
  geom_line(size = 1, colour = "black") +
  geom_point(size = 3, colour = "black") +
  theme_bw() + 
  theme(legend.position = "none") + 
  theme(axis.title = element_text(size = 16, face = "bold"), 
        axis.text = element_text(size = 16),
        plot.title = element_text(face = "bold", size = 20)) +
scale_y_continuous(breaks=seq(100,500,50)) +
    coord_cartesian(ylim = c(100, 500)) +
  scale_x_continuous(breaks=seq(600,1150, 100)) +
geom_label_repel(data=v_chb, aes(y=diff, label=round(diff,0)), color="black", min.segment.length = 0, seed = 42, box.padding = 0.5)  +
  labs( title = "Delta Plots: Freq x Blur", x = "Mean RTs per quantile", y = "High Blur - Clear Effect")

p1

Code
p2 <- ggplot(v_chb_freq, aes(x = mean_rt, y = diff)) + facet_grid(~blur) + 
  geom_line(size = 1, colour = "black") +
  geom_point(size = 3, colour = "black") +
  theme_bw() + 
  theme(legend.position = "none") + 
  theme(axis.title = element_text(size = 16, face = "bold"), 
        axis.text = element_text(size = 16),
        plot.title = element_text(face = "bold", size = 20)) +
scale_y_continuous(breaks=seq(0,100,10)) +
    coord_cartesian(ylim = c(0, 100)) +
  scale_x_continuous(breaks=seq(600,1100, 100))+
  geom_label_repel(data=v_chb_freq, aes(y=diff, label=round(diff,0)), color="black", min.segment.length = 0, seed = 42, box.padding = 0.5)+
  labs( title = "Delta Plots: Freq x Blur", x = "Mean RT per quantile", y = "Frequency Effect")

p2

5.12.4 Clear vs. Low Blur

Code
v_clb <- v %>%
  dplyr::group_by(frequency,q)%>%
   mutate(mean_rt = mean(MRT)) %>% 
  ungroup() %>% 
  tidyr::pivot_wider(names_from = "blur", values_from = "MRT") %>%
  mutate(diff=LB-C)


p2 <- ggplot(v_clb, aes(x = mean_rt, y = diff)) + facet_grid(~frequency) + 
  geom_abline(intercept = 0, slope = 0) +
  geom_line(size = 1, colour = "black") +
  geom_point(size = 3, colour = "black") +
  theme_bw() + 
  theme(legend.position = "none") + 
  theme(axis.title = element_text(size = 16, face = "bold"), 
        axis.text = element_text(size = 16),
        plot.title = element_text(face = "bold", size = 20)) +
scale_y_continuous(breaks=seq(0, 100, 20)) +
    coord_cartesian(ylim = c(0, 100)) +
  scale_x_continuous(breaks=seq(600,1100, 100))+
  geom_label_repel(data=v_clb, aes(y=diff, label=round(diff,0)), color="black", min.segment.length = 0, seed = 42, box.padding = 0.5) + 
  labs( title = "Clear - Low Blur", x = "Mean RT per quantile", y = "Differences")


p2

5.12.5 High Blur vs. Low Blur

Code
v_hlb <- v %>%
  dplyr::filter(blur=="HB" | blur=="LB") %>%
  dplyr::group_by(frequency,q)%>%
   mutate(mean_rt = mean(MRT)) %>% 
  ungroup() %>% 
  tidyr::pivot_wider(names_from = "blur", values_from = "MRT") %>%
  mutate(diff=HB-LB)


p3 <- ggplot(v_hlb, aes(x = mean_rt, y = diff)) + 
  facet_grid(~frequency) + 
  geom_abline(intercept = 0, slope = 0) +
  geom_line(size = 1, colour = "black") +
  geom_point(size = 3, colour = "black") +
  theme_bw() + 
  theme(legend.position = "none") + 
  theme(axis.title = element_text(size = 16, face = "bold"), 
        axis.text = element_text(size = 16),
        plot.title = element_text(face = "bold", size = 20)) +
  scale_x_continuous(breaks=seq(600,1100, 100))+
  geom_label_repel(data=v_hlb, aes(y=diff, label=round(diff,0)), color="black", min.segment.length = 0, seed = 42, box.padding = 0.5) + 
  labs( title = "High Blur - Low Blur", x = "Mean RT per quantile", y = "Group differences")


p3

5.12.6 Quantile/delta summary plot

Code
bottom <- cowplot::plot_grid(p1, p2,p3, 
                   ncol = 3, 
                   nrow = 1,
                   label_size = 14, 
                   hjust = -0.8, 
                   scale=.95,
                   align = "v")

cowplot::plot_grid(p, bottom, 
                   ncol=1, nrow=2)

Group RT distributions for the blurring and word frequency manipulations. Top: Each point represents the average RT quantile (.1, .3, .5, .7, and .9) in each condition. Bottom: These values were obtained by computing the quantiles for each participant and then averaging the obtained values for each quantile over participants.

5.13 BRM: Conditionalized Memory

Code
mem_sc <- read_csv("https://osf.io/eapu5/download")

head(mem_sc)
# A tibble: 6 × 11
   ...1 participant   target frequency blur  study    rt  corr sayold condition1
  <dbl> <chr>         <chr>  <chr>     <chr> <chr> <dbl> <dbl>  <dbl> <chr>     
1     1 54847f1cfdf9… AWAKE  LOW       HB    old    1.46     1      1 High Blur 
2     2 54847f1cfdf9… BOTHER HIGH      LB    old    1.45     1      1 Low Blur  
3     3 54847f1cfdf9… BOW    LOW       C     old    1.45     0      0 Clear     
4     4 54847f1cfdf9… BUCKLE LOW       C     old    1.45     1      1 Clear     
5     5 54847f1cfdf9… DEAD   HIGH      LB    old    1.46     1      1 Low Blur  
6     6 54847f1cfdf9… DIRTY  HIGH      LB    old    1.44     1      1 Low Blur  
# ℹ 1 more variable: isold <dbl>

5.13.1 Contrasts

Code
#hypothesis
blurC <-hypr(HB~C, HB~LB,levels=c("C", "HB", "LB"))
blurC
hypr object containing 2 null hypotheses:
H0.1: 0 = HB - C
H0.2: 0 = HB - LB

Call:
hypr(~HB - C, ~HB - LB, levels = c("C", "HB", "LB"))

Hypothesis matrix (transposed):
   [,1] [,2]
C  -1    0  
HB  1    1  
LB  0   -1  

Contrast matrix:
   [,1] [,2]
C  -2/3  1/3
HB  1/3  1/3
LB  1/3 -2/3
Code
HF_cont <- hypr(HIGH~LOW,levels=c("HIGH", "LOW"))
HF_cont
hypr object containing one (1) null hypothesis:
H0.1: 0 = HIGH - LOW

Call:
hypr(~HIGH - LOW, levels = c("HIGH", "LOW"))

Hypothesis matrix (transposed):
     [,1]
HIGH  1  
LOW  -1  

Contrast matrix:
     [,1]
HIGH  1/2
LOW  -1/2
Code
#set contrasts in df 
mem_sc$blur<-as.factor(mem_sc$blur)

contrasts(mem_sc$blur) <-contr.hypothesis(blurC)

mem_sc$frequency<-as.factor(mem_sc$frequency)

contrasts(mem_sc$frequency) <-contr.hypothesis(HF_cont)

5.14 BRM Model: Memory Conditionalized

Code
prior_exp <- c(set_prior("cauchy(0,.35)", class = "b"))

fit_sc_mem <- brm(sayold ~ isold*blur*frequency + (1+isold*blur*frequency|participant) + (1+isold*blur*frequency|target), data=mem_sc, 
warmup = 1000,
                    iter = 5000,
                    chains = 4, 
                    init=0, 
                    family = bernoulli(link = "probit"),
                    cores = 4, 
control = list(adapt_delta = 0.9),
prior=prior_exp, 
sample_prior = T, 
save_pars = save_pars(all=T),
backend="cmdstanr", 
threads = threading(4))

5.14.1 Marginal Means and Differences

Code
fit_sc_mem <- read_rds("https://osf.io/wn79f/download")

fit_sc_mem_lb <- read_rds("https://osf.io/c8bqh/download")
Code
emm_m2_d1 <- emmeans(fit_sc_mem, ~isold | blur * frequency) %>% 
  contrast("revpairwise")

emm_m2_d2 <- emmeans(fit_sc_mem, ~isold + blur * frequency) %>% 
  contrast(interaction = c("revpairwise", "pairwise"), by = "frequency")

# (Negative) criteria
emm_m2_c1 <- emmeans(fit_sc_mem, ~blur * frequency)
emm_m2_c2 <- emmeans(fit_sc_mem, ~blur | frequency) %>% 
  contrast("pairwise")
Code
tmp <- bind_rows(
  bind_rows(
    gather_emmeans_draws(emm_m2_d1) %>% 
      group_by(blur, frequency) %>% 
      select(-contrast),
    gather_emmeans_draws(emm_m2_d2) %>% 
      rename(
        blur = blur_pairwise
      ) %>% 
      group_by(blur, frequency) %>% 
      select(-isold_revpairwise)
  ),
  bind_rows(
    gather_emmeans_draws(emm_m2_c1),
    gather_emmeans_draws(emm_m2_c2) %>% 
      rename(
        blur = contrast
      )
  ),
  .id = "Parameter"
) %>% 
  ungroup() %>% 
  mutate(Parameter = factor(Parameter, labels = c("dprime", "Criterion"))) %>% 
    mutate(
    t = if_else(str_detect(blur, " - "), "Differences", "Group means") %>% 
      fct_inorder(),
    blur = fct_inorder(blur)
  ) 
  
tmp %>%   
  mutate(.value = if_else(Parameter == "Criterion", .value * -1, .value)) %>% 
  mutate(Parameter = fct_rev(Parameter)) %>% 
  ggplot(aes(blur, .value, slab_fill = frequency)) +
  labs(
    x = "Blurring Level (or difference)",
    y = "Parameter value"
  ) +
  geom_hline(yintercept = 0, linewidth = .25) +
  scale_x_continuous(
    breaks = 1:6,
    labels = unique(tmp$blur)
  ) +
  scale_slab_alpha_discrete(range = c(1, .5)) +
  stat_halfeye(
    normalize = "xy",
    width = 0.33,
    slab_color = "black",
    interval_size_range = c(0.2, 1),
    .width = c(0.66, 0.95), 
    aes(
      side = ifelse(frequency=="HIGH", "left", "right"),
      x = ifelse(frequency == "HIGH", as.numeric(blur)-.08, as.numeric(blur)+.08)
      )
  ) +
  guides(slab_alpha = "none") +
  facet_grid(Parameter~t, scales = "free")

Figure 5.1: Posterior distributions and 95% CIs of the criterion and d′ parameters, or differences therein, from the conditionalized model.
Code
a = hypothesis(fit_sc_mem , "isold1:blur1 > 0")

b = hypothesis(fit_sc_mem , "isold1:blur2 > 0")

c = hypothesis(fit_sc_mem_lb, "isold1:blur1 = 0")

d= hypothesis(fit_sc_mem, "frequency1 > 0")

e = hypothesis(fit_sc_mem, "isold1:blur1:frequency1 > 0")

f = hypothesis(fit_sc_mem, "isold1:blur2:frequency1 > 0")

g = hypothesis(fit_sc_mem_lb, "isold1:blur1:frequency1 = 0")

tab <- bind_rows(a$hypothesis, b$hypothesis, c$hypothesis, d$hypothesis, e$hypothesis, f$hypothesis, g$hypothesis) %>%
    mutate(Evid.Ratio=as.numeric(Evid.Ratio))%>%
  select(-Star)

tab[, -1] <- t(apply(tab[, -1], 1, round, digits = 3))


tab %>% 
   mutate(Hypothesis = c("High Blur - Clear > 0", "High Blur-Low Blur > 0", "Low Blur - Clear = 0","Low Frequency - High Frequency",  "(High Blur-Clear) - (Low Frequency-High Frequency) > 0", "(High Blur-Low Blur) - (Low Frequency-High Frequency) > 0", "(Low Blur-Clear) - (Low Frequency-High Frequency) > 0")) %>% 
  gt(caption=md("Table: Experiment 3 Memory Sensitivity D-prime")) %>%
  cols_align(
    columns=-1,
    align="right"
  )
Table: Experiment 3 Memory Sensitivity D-prime
Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
High Blur - Clear > 0 0.070 0.025 0.028 0.112 409.256 0.998
High Blur-Low Blur > 0 0.087 0.027 0.044 0.131 1999.000 1.000
Low Blur - Clear = 0 -0.017 0.026 -0.068 0.034 4.434 0.816
Low Frequency - High Frequency 0.075 0.039 0.011 0.138 35.281 0.972
(High Blur-Clear) - (Low Frequency-High Frequency) > 0 0.142 0.051 0.061 0.226 483.848 0.998
(High Blur-Low Blur) - (Low Frequency-High Frequency) > 0 0.041 0.051 -0.044 0.126 3.651 0.785
(Low Blur-Clear) - (Low Frequency-High Frequency) = 0 0.099 0.052 -0.003 0.202 0.451 0.311

5.14.1.1 Frequency

Code
library(emmeans)
# Dprimes for three groups
emm_freq <- emmeans(fit_sc_mem, ~isold + frequency) %>% 
  contrast(interaction = c("revpairwise", "pairwise")) %>% 
  parameters::parameters(centrality = "mean")
5.14.1.1.1 Blur
Code
# Dprimes for three groups
emm_blur <- emmeans(fit_sc_mem, ~isold + blur) %>% 
  contrast(interaction = c("revpairwise", "pairwise")) %>% 
  parameters::parameters(centrality = "mean") %>%
  flextable()
5.14.1.1.2 Blur * Frequency
Code
emm_m2_d2 <- emmeans(fit_sc_mem, ~isold + blur * frequency) %>% 
  contrast(interaction = c("revpairwise", "pairwise"), by = "frequency") %>%
    parameters::parameters(centrality = "mean")

emm_m2_d2
Parameter                |     Mean |   Mean.1 |         95% CI |     pd
------------------------------------------------------------------------
old - new, C - HB, HIGH  |    -0.14 |    -0.14 | [-0.21, -0.07] |   100%
old - new, C - LB, HIGH  |    -0.03 |    -0.03 | [-0.11,  0.04] | 81.74%
old - new, HB - LB, HIGH |     0.11 |     0.11 | [ 0.03,  0.18] | 99.78%
old - new, C - HB, LOW   | 8.93e-04 | 8.93e-04 | [-0.07,  0.07] | 51.14%
old - new, C - LB, LOW   |     0.07 |     0.07 | [-0.01,  0.14] | 95.45%
old - new, HB - LB, LOW  |     0.07 |     0.07 | [-0.01,  0.14] | 96.21%

5.14.2 Write-up

5.14.3 Recognition memory (conditionalized)

Low frequency words were better recognized than high frequency words, $\beta$ = 0.075, 95% Cr.I[0.011, 0.138], ER = 35.281. Similar to Experiments 1a and 1b, there was better recognition memory for high blurred words compared to clear words, $\beta$ = 0.07, 95% Cr.I[0.028, 0.112], ER = 409.256, and low blurred words, $\beta$ = 0.087, 95% Cr.I[0.044, 0.131], ER = 1999. There was no recognition memory difference between clear and low blurred words, $\beta$ = -0.017, 95% Cr.I[-0.068, 0.034], ER = 4.434. There was strong evidence for an interaction between blurring (high blurred vs. clear) and frequency, $\beta$ = 0.142, 95% Cr.I[0.061, 0.226], ER = 483.848, with better memory for high frequency-high blurred words, $\beta$ = -0.14, 95% Cr.I[-0.21, -0.07]. There was some evidence of an interaction between blurring and frequency for high blurred vs. low blurred words, $\beta$ = 0.041, 95% Cr.I[-0.044, 0.126], ER = 3.651: high frequency-high blurred words were better recognized than high frequency-low blurred words, $\beta$ = 0.11, 95% Cr.I[0.03, 0.18]. There was only ambiguous evidence regarding an interaction between frequency and the low blurred vs. clear comparison, $\beta$ = 0.099, 95% Cr.I[-0.003, 0.202], ER = 0.451. For low frequency words, clear words were remembered somewhat better than low blurred words.

5.15 BRM Model: Memory Unconditionalized

Code
prior_exp <- c(set_prior("cauchy(0,.35)", class = "b"))

fit_sc_mem_uc <- brm(sayold ~ isold*blur*frequency + (1+isold*blur*frequency|participant) + (1+isold*blur*frequency|target), data=mem_sc, # mirrors the conditionalized model above; unconditionalized trial-level data assumed
warmup = 1000,
                    iter = 5000,
                    chains = 4, 
                    init=0, 
                    family = bernoulli(link = "probit"),
                    cores = 4, 
control = list(adapt_delta = 0.9),
prior=prior_exp, 
sample_prior = T, 
save_pars = save_pars(all=T),
backend="cmdstanr", 
threads = threading(4))

5.15.1 Marginal Means and Differences

Code
fit_sc_mem_uc <- read_rds("https://osf.io/ghv2s/download")
Code
emm_m2_d1 <- emmeans(fit_sc_mem_uc, ~isold | blur * frequency) %>% 
  contrast("revpairwise")

emm_m2_d2 <- emmeans(fit_sc_mem_uc, ~isold + blur * frequency) %>% 
  contrast(interaction = c("revpairwise", "pairwise"), by = "frequency")

# (Negative) criteria
emm_m2_c1 <- emmeans(fit_sc_mem_uc, ~blur * frequency)
emm_m2_c2 <- emmeans(fit_sc_mem_uc, ~blur | frequency) %>% 
  contrast("pairwise")
Code
tmp <- bind_rows(
  bind_rows(
    gather_emmeans_draws(emm_m2_d1) %>% 
      group_by(blur, frequency) %>% 
      select(-contrast),
    gather_emmeans_draws(emm_m2_d2) %>% 
      rename(
        blur = blur_pairwise
      ) %>% 
      group_by(blur, frequency) %>% 
      select(-isold_revpairwise)
  ),
  bind_rows(
    gather_emmeans_draws(emm_m2_c1),
    gather_emmeans_draws(emm_m2_c2) %>% 
      rename(
        blur = contrast
      )
  ),
  .id = "Parameter"
) %>% 
  ungroup() %>% 
  mutate(Parameter = factor(Parameter, labels = c("dprime", "Criterion"))) %>% 
   mutate(
    t = if_else(str_detect(blur, " - "), "Differences", "Group means") %>% 
      fct_inorder(),
    blur = fct_inorder(blur)
  ) 
tmp %>%   
  mutate(.value = if_else(Parameter == "Criterion", .value * -1, .value)) %>% 
  mutate(Parameter = fct_rev(Parameter)) %>% 
  ggplot(aes(blur, .value, slab_fill = frequency)) +
  labs(
    x = "Blurring Level (or difference)",
    y = "Parameter value"
  ) +
  geom_hline(yintercept = 0, linewidth = .25) +
  scale_x_continuous(
    breaks = 1:6,
    labels = unique(tmp$blur)
  ) +
  scale_slab_alpha_discrete(range = c(1, .5)) +
  stat_halfeye(
    normalize = "xy",
    width = 0.44,
    slab_color = "black",
    interval_size_range = c(0.2, 1),
    .width = c(0.66, 0.95),
    aes(
      side = ifelse(frequency=="HIGH", "left", "right"),
      x = ifelse(frequency == "HIGH", as.numeric(blur)-.08, as.numeric(blur)+.08)
      )
  ) +
  guides(slab_alpha = "none") +
  facet_grid(Parameter~t, scales = "free")

Figure 5.2: Posterior distributions and 95% CIs of the criterion and d′ parameters, or differences therein, from the unconditionalized model.
Code
a = hypothesis(fit_sc_mem_uc , "isold:blur1 > 0")

b = hypothesis(fit_sc_mem_uc , "isold:blur2 > 0")

c = hypothesis(fit_sc_mem_uc, "isold:blur1 > 0")

d= hypothesis(fit_sc_mem_uc, "frequency1 > 0")

e = hypothesis(fit_sc_mem_uc, "isold:blur1:frequency1 > 0")

f = hypothesis(fit_sc_mem_uc , "isold:blur2:frequency1 > 0")

g = hypothesis(fit_sc_mem_uc, "isold:blur1:frequency1 > 0")

tab <- bind_rows(a$hypothesis, b$hypothesis, c$hypothesis, d$hypothesis, e$hypothesis, f$hypothesis, g$hypothesis) %>%
    mutate(Evid.Ratio=as.numeric(Evid.Ratio))%>%
  select(-Star)

tab[, -1] <- t(apply(tab[, -1], 1, round, digits = 3))


tab %>% 
   mutate(Hypothesis = c("High Blur - Clear > 0", "High Blur-Low Blur > 0", "Low Blur - Clear = 0","Low Frequency - High Frequency",  "(High Blur-Clear) - (Low Frequency-High Frequency) > 0", "(High Blur-Low Blur) - (Low Frequency-High Frequency) > 0", "(Low Blur-Clear) - (Low Frequency-High Frequency) > 0")) %>% 
  gt(caption=md("Table: Experiment 3 Memory Sensitivity D'")) %>% 
  cols_align(
    columns=-1,
    align="right"
  )
Table: Experiment 3 Memory Sensitivity D’
Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
High Blur - Clear > 0 0.058 0.025 0.017 0.100 91.49 0.989
High Blur-Low Blur > 0 0.072 0.026 0.031 0.115 409.26 0.998
Low Blur - Clear = 0 0.058 0.025 0.017 0.100 91.49 0.989
Low Frequency - High Frequency 0.222 0.054 0.132 0.312 5332.33 1.000
(High Blur-Clear) - (Low Frequency-High Frequency) > 0 0.145 0.049 0.066 0.225 726.27 0.999
(High Blur-Low Blur) - (Low Frequency-High Frequency) > 0 0.045 0.050 -0.038 0.128 4.49 0.818
(Low Blur-Clear) - (Low Frequency-High Frequency) > 0 0.145 0.049 0.066 0.225 726.27 0.999

5.15.1.1 Frequency

Code
library(emmeans)
# Dprimes for three groups
emm_freq <- emmeans(fit_sc_mem_uc, ~isold + frequency) %>% 
  contrast(interaction = c("revpairwise", "pairwise")) %>% 
  parameters::parameters(centrality = "mean")
5.15.1.1.1 Blur
Code
# Dprimes for three groups
emm_blur <- emmeans(fit_sc_mem_uc, ~isold + blur) %>% 
  contrast(interaction = c("revpairwise", "pairwise")) %>% 
  parameters::parameters(centrality = "mean") %>%
  flextable()
5.15.1.1.2 Blur * Frequency
Code
emm_m2_d2 <- emmeans(fit_sc_mem_uc, ~isold + blur * frequency) %>% 
  contrast(interaction = c("revpairwise", "pairwise"), by = "frequency") %>%
    parameters::parameters(centrality = "mean")

emm_m2_d2
Parameter            |  Mean | Mean.1 |         95% CI |     pd
---------------------------------------------------------------
1 - 0, C - HB, HIGH  | -0.13 |  -0.13 | [-0.20, -0.06] | 99.99%
1 - 0, C - LB, HIGH  | -0.04 |  -0.04 | [-0.11,  0.03] | 83.93%
1 - 0, HB - LB, HIGH |  0.09 |   0.09 | [ 0.03,  0.16] | 99.59%
1 - 0, C - HB, LOW   |  0.01 |   0.01 | [-0.05,  0.08] | 65.49%
1 - 0, C - LB, LOW   |  0.06 |   0.06 | [-0.01,  0.14] | 95.88%
1 - 0, HB - LB, LOW  |  0.05 |   0.05 | [-0.02,  0.12] | 91.49%

5.15.2 Write-up

5.15.2.1 Recognition Memory (Unconditionalized)

Low frequency words were better recognized than high frequency words, $b$ = 0.222, 95% Cr.I[0.132, 0.312], ER = 5332.333. Similar to Experiments 1a and 1b, there was better recognition memory for high blurred words compared to clear words, $b$ = 0.058, 95% Cr.I[0.017, 0.1], ER = 91.486, and low blurred words, $b$ = 0.072, 95% Cr.I[0.031, 0.115], ER = 409.256. There was no clear recognition memory difference between clear and low blurred words. There was strong evidence for an interaction between blurring (high blurred vs. clear) and frequency on sensitivity, $b$ = 0.145, 95% Cr.I[0.066, 0.225], ER = 726.273, with better memory for high frequency-high blurred words compared to clear words, $b$ = -0.13, 95% Cr.I[-0.2, -0.06]. There was some evidence of an interaction between blurring and frequency for high blurred vs. low blurred words, $b$ = 0.045, 95% Cr.I[-0.038, 0.128], ER = 4.485: high frequency-high blurred words were better recognized than high frequency-low blurred words, $b$ = 0.09, 95% Cr.I[0.03, 0.16]. There was no interaction between frequency and the low blurred vs. clear comparison.

6 WF x Blur: LDT

Note

I observed a similar pattern in the LDT task, where LF words did not show any difference between blurring levels. The reason I did not include that study is that WF and degradation do not usually interact during encoding in the LDT (I found this out after running the experiment). Showing the interaction is crucial for testing the compensatory processing account.

6.1 Marginal Means and Differences

Code
wf_mem_ldt <- read_csv("https://osf.io/cu6y9/download")

head(wf_mem_ldt)
# A tibble: 6 × 13
   ...1 participant Target freq  blur  date  study    rt  corr sayold condition1
  <dbl> <chr>       <chr>  <chr> <chr> <chr> <chr> <dbl> <dbl>  <dbl> <chr>     
1     1 2023-05-02… BELOW  Low   HB    2023… old   0.607     0      0 High Blur 
2     2 2023-05-02… BOW    Low   C     2023… old   0.656     1      1 Clear     
3     3 2023-05-02… BREAT… High  LB    2023… old   0.626     1      1 Low Blur  
4     4 2023-05-02… BUCKLE Low   C     2023… old   0.836     1      1 Clear     
5     5 2023-05-02… COW    Low   LB    2023… old   0.900     1      1 Low Blur  
6     6 2023-05-02… CROCO… Low   LB    2023… old   0.566     1      1 Low Blur  
# ℹ 2 more variables: frequency <chr>, isold <dbl>
Code
tmp <- tempdir()
download.file("https://osf.io/3avcy/download", 
              file.path(tmp, "wf_blmm_sdt_cond.rda"))
load(file.path(tmp, "wf_blmm_sdt_cond.rda"))

fit_wf_mem_ldt_lb <- read_rds("https://osf.io/yv9qd/download")
Code
emm_m2_d1 <- emmeans(fit_wf_mem, ~isold | blur * freq) %>% 
  contrast("revpairwise")

emm_m2_d2 <- emmeans(fit_wf_mem, ~isold + blur * freq) %>% 
  contrast(interaction = c("revpairwise", "pairwise"), by = "freq")

# (Negative) criteria
emm_m2_c1 <- emmeans(fit_wf_mem, ~blur * freq)
emm_m2_c2 <- emmeans(fit_wf_mem, ~blur | freq) %>% 
  contrast("pairwise")
Code
tmp <- bind_rows(
  bind_rows(
    gather_emmeans_draws(emm_m2_d1) %>% 
      group_by(blur, freq) %>% 
      select(-contrast),
    gather_emmeans_draws(emm_m2_d2) %>% 
      rename(
        blur = blur_pairwise
      ) %>% 
      group_by(blur, freq) %>% 
      select(-isold_revpairwise)
  ),
  bind_rows(
    gather_emmeans_draws(emm_m2_c1),
    gather_emmeans_draws(emm_m2_c2) %>% 
      rename(
        blur = contrast
      )
  ),
  .id = "Parameter"
) %>% 
  ungroup() %>%
  mutate(Parameter = factor(Parameter, labels = c("dprime", "Criterion"))) %>% 
  mutate(
    t = if_else(str_detect(blur, " - "), "Differences", "Group means") %>% 
      fct_inorder(),
    blur = fct_inorder(blur)
  ) 
tmp %>%   
  mutate(.value = if_else(Parameter == "Criterion", .value * -1, .value)) %>% 
  mutate(Parameter = fct_rev(Parameter)) %>% 
  ggplot(aes(blur, .value, slab_fill = freq)) +
  labs(
    x = "Blurring Level (or difference)",
    y = "Parameter value"
  ) +
  geom_hline(yintercept = 0, linewidth = .25) +
  scale_x_continuous(
    breaks = 1:6,
    labels = unique(tmp$blur)
  ) +
  scale_slab_alpha_discrete(range = c(1, .5)) +
  stat_halfeye(
    normalize = "xy",
    width = 0.33,
    slab_color = "black",
    linewidth = 0.4,
    interval_size_range = c(0.2, 1),
    .width = c(0.66, 0.95), 
    aes(
      side = ifelse(freq == "High", "left", "right"),
      x = ifelse(freq == "High", as.numeric(blur)-.1, as.numeric(blur)+.1)
      )
  ) +
  guides(slab_alpha = "none") +
  facet_grid(Parameter~t, scales = "free")

Figure 6.1: Posterior distributions and 95% CIs of the criterion and d′ parameters, or differences therein, from the LDT model.
Code
# Directional (and point) hypothesis tests on the d' contrasts;
# Evid.Ratio is the evidence ratio returned by brms::hypothesis()
a <- hypothesis(fit_wf_mem, "isold:blur1 > 0")
b <- hypothesis(fit_wf_mem, "isold:blur2 > 0")
c <- hypothesis(fit_wf_mem_ldt_lb, "isold1:blur1 = 0")
d <- hypothesis(fit_wf_mem, "freq1 > 0")
e <- hypothesis(fit_wf_mem, "isold:blur1:freq1 > 0")
f <- hypothesis(fit_wf_mem, "isold:blur2:freq1 > 0")
g <- hypothesis(fit_wf_mem_ldt_lb, "isold1:blur1:freq1 > 0")

tab <- bind_rows(
  a$hypothesis, b$hypothesis, c$hypothesis, d$hypothesis,
  e$hypothesis, f$hypothesis, g$hypothesis
) %>%
  mutate(Evid.Ratio = as.numeric(Evid.Ratio)) %>%
  select(-Star)

tab[, -1] <- t(apply(tab[, -1], 1, round, digits = 3))

tab %>% 
  mutate(Hypothesis = c(
    "High Blur - Clear > 0",
    "High Blur - Low Blur > 0",
    "Low Blur - Clear = 0",
    "Low Frequency - High Frequency > 0",
    "(High Blur - Clear) - (Low Frequency - High Frequency) > 0",
    "(High Blur - Low Blur) - (Low Frequency - High Frequency) > 0",
    "(Low Blur - Clear) - (Low Frequency - High Frequency) > 0"
  )) %>% 
  gt(caption = md("Table: Experiment 3 Memory Sensitivity D'")) %>% 
  cols_align(
    columns = -1,
    align = "right"
  )
Table: Experiment 3 Memory Sensitivity D'

| Hypothesis | Estimate | Est.Error | CI.Lower | CI.Upper | Evid.Ratio | Post.Prob |
|---|---:|---:|---:|---:|---:|---:|
| High Blur - Clear > 0 | 0.060 | 0.023 | 0.023 | 0.097 | 252.968 | 0.996 |
| High Blur - Low Blur > 0 | 0.096 | 0.022 | 0.060 | 0.133 | Inf | 1.000 |
| Low Blur - Clear = 0 | -0.037 | 0.022 | -0.079 | 0.006 | 2.049 | 0.672 |
| Low Frequency - High Frequency > 0 | 0.255 | 0.051 | 0.172 | 0.340 | Inf | 1.000 |
| (High Blur - Clear) - (Low Frequency - High Frequency) > 0 | 0.051 | 0.043 | -0.019 | 0.121 | 7.611 | 0.884 |
| (High Blur - Low Blur) - (Low Frequency - High Frequency) > 0 | 0.103 | 0.043 | 0.032 | 0.175 | 124.984 | 0.992 |
| (Low Blur - Clear) - (Low Frequency - High Frequency) > 0 | -0.051 | 0.043 | -0.121 | 0.020 | 0.133 | 0.118 |
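The d' and criterion values above come from the Bayesian model, but the underlying signal detection arithmetic is simple. A minimal sketch with hypothetical hit and false-alarm counts (illustrative numbers, not data from this experiment):

Code
# Equal-variance SDT from raw counts (hypothetical numbers):
# d' = z(H) - z(FA); criterion c = -0.5 * (z(H) + z(FA))
hits <- 78; misses <- 22   # responses to old items
fas  <- 30; crs    <- 70   # responses to new items
H  <- hits / (hits + misses)   # hit rate
FA <- fas / (fas + crs)        # false-alarm rate
c(dprime    = qnorm(H) - qnorm(FA),
  criterion = -0.5 * (qnorm(H) + qnorm(FA)))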

6.1.0.1 Frequency

Code
library(emmeans)
# d' contrast between low- and high-frequency words
emm_freq <- emmeans(fit_sc_mem, ~isold + frequency) %>% 
  contrast(interaction = c("revpairwise", "pairwise")) %>% 
  parameters::parameters(centrality = "mean")
6.1.0.1.1 Blur
Code
# d' differences across the three blur levels
emm_blur <- emmeans(fit_sc_mem, ~isold + blur) %>% 
  contrast(interaction = c("revpairwise", "pairwise")) %>% 
  parameters::parameters(centrality = "mean") %>%
  flextable()
6.1.0.1.2 Blur * Frequency
Code
# d' differences between blur levels within each frequency
# (unconditionalized model)
emm_m2_d2 <- emmeans(fit_sc_mem_uc, ~isold + blur * frequency) %>% 
  contrast(interaction = c("revpairwise", "pairwise"), by = "frequency") %>%
  parameters::parameters(centrality = "mean")

emm_m2_d2
Parameter            |  Mean | Mean.1 |         95% CI |     pd
---------------------------------------------------------------
1 - 0, C - HB, HIGH  | -0.13 |  -0.13 | [-0.20, -0.06] | 99.99%
1 - 0, C - LB, HIGH  | -0.04 |  -0.04 | [-0.11,  0.03] | 83.93%
1 - 0, HB - LB, HIGH |  0.09 |   0.09 | [ 0.03,  0.16] | 99.59%
1 - 0, C - HB, LOW   |  0.01 |   0.01 | [-0.05,  0.08] | 65.49%
1 - 0, C - LB, LOW   |  0.06 |   0.06 | [-0.01,  0.14] | 95.88%
1 - 0, HB - LB, LOW  |  0.05 |   0.05 | [-0.02,  0.12] | 91.49%

7 Discussion

Experiment 3 explored the source of the late-stage processing underlying the disfluency effect. Using a word frequency manipulation coupled with a semantic categorization task, we found non-additive effects of frequency and blurring on response time distributions. Specifically, the word frequency effect was magnified for high blurred words (compared to clear and low blurred words). We observed this on \(\mu\) and \(\tau\), indicating that when stimuli are degraded, word frequency influences both early and late stages of processing during word recognition. This pattern has also been found with other disfluent stimuli, such as hard-to-read handwritten cursive words (Barnhart & Goldinger, 2010; Perea et al., 2016; Vergara-Martínez et al., 2021).
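Because the \(\mu\) and \(\tau\) estimates above come from the Bayesian model, the following is only a minimal sketch of ex-Gaussian estimation itself: a maximum-likelihood fit via optim on simulated RTs (the function fit_exgauss is ours for illustration, not part of the analysis code).

Code
# Log-density of the ex-Gaussian (normal + exponential convolution)
dexgauss_log <- function(x, mu, sigma, tau) {
  -log(tau) + mu / tau + sigma^2 / (2 * tau^2) - x / tau +
    pnorm((x - mu) / sigma - sigma / tau, log.p = TRUE)
}

# ML fit; sigma and tau are optimized on the log scale to stay positive
fit_exgauss <- function(rt) {
  nll <- function(p) -sum(dexgauss_log(rt, p[1], exp(p[2]), exp(p[3])))
  start <- c(mean(rt) - sd(rt) / 2, log(sd(rt) / 2), log(sd(rt) / 2))
  out <- optim(start, nll)
  c(mu = out$par[1], sigma = exp(out$par[2]), tau = exp(out$par[3]))
}

# Simulated RTs with mu = 500, sigma = 50, tau = 150
set.seed(1)
rt <- rnorm(500, 500, 50) + rexp(500, rate = 1 / 150)
fit_exgauss(rt)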

Looking at the quantile and delta plots, there was a robust word-frequency advantage for clear and low-blurred words that increased across the 0.1, 0.3, 0.5, 0.7, and 0.9 quantiles. Critically, for the high blurred condition, the word-frequency effect not only changed across quantiles with a steeper slope, but it was also larger at the earliest quantiles. This finding suggests that word frequency already taps into an encoding stage of processing when stimuli appear in a hard-to-read format like high blur. This replicates earlier research with easy-to-read and hard-to-read handwriting (Perea et al., 2016; Vergara-Martínez et al., 2021).
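For the mechanics behind such plots, here is a minimal sketch of the per-condition quantiles and the frequency-effect deltas; the data frame rt_data and its columns are hypothetical placeholders, not objects from the analysis above.

Code
library(dplyr)
library(tidyr)

probs <- c(.1, .3, .5, .7, .9)

# RT quantiles per blur x frequency cell, then the word-frequency
# effect (Low - High) at each quantile
deltas <- rt_data %>%
  group_by(blur, freq) %>%
  reframe(q = probs, rt_q = quantile(rt, probs)) %>%
  pivot_wider(names_from = freq, values_from = rt_q) %>%
  mutate(delta = Low - High)

# Plotting delta against the mean quantile, (Low + High) / 2,
# yields the delta plot
deltas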

Critical here is how this non-additivity impacts memory. Replicating Experiments 1a and 1b, we observed an overall memory benefit for high blurred words. Additionally, we observed better recognition memory for low frequency words than for high frequency words. Examining the interaction between blurring and frequency revealed a distinct pattern. We observed a disfluency effect for high frequency, high blurred words. However, low frequency words showed similar memory performance regardless of blurring level (the credible interval included 0 for each comparison). This pattern of findings sheds some light on the potential source of late-stage processing in the disfluency effect.

The compensatory processing account (Mulligan, 1996) posits that memory performance depends on the depth of stimulus encoding, with items undergoing the most top-down processing yielding the largest memory benefit. Contrary to this, we did not observe superior memory for low frequency, high blurred words. In fact, low frequency words did not show a strong disfluency effect in any of the comparisons. Interestingly, there was a hint of a reversal, with clear words showing higher sensitivity than low blurred words. Despite this, we did observe a disfluency effect for high frequency words. The theoretical implications of these findings are discussed in the General Discussion.

Andrews, S., & Heathcote, A. (2001). Distinguishing common and task-specific processes in word identification: A matter of some moment? Journal of Experimental Psychology: Learning, Memory, and Cognition, 27(2), 514–544. https://doi.org/10.1037/0278-7393.27.2.514
Balota, D. A., & Spieler, D. H. (1999). Word frequency, repetition, and lexicality effects in word recognition tasks: Beyond measures of central tendency. Journal of Experimental Psychology: General, 128(1), 32–55. https://doi.org/10.1037/0096-3445.128.1.32
Barnhart, A. S., & Goldinger, S. D. (2010). Interpreting chicken-scratch: Lexical access for handwritten words. Journal of Experimental Psychology: Human Perception and Performance, 36(4), 906–923. https://doi.org/10.1037/a0019258
Diana, R. A., & Reder, L. M. (2006). The low-frequency encoding disadvantage: Word frequency affects processing demands. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(4), 805–815. https://doi.org/10.1037/0278-7393.32.4.805
Fernández-López, M., Gómez, P., & Perea, M. (2022). Letter rotations: Through the magnifying glass and what evidence found there. Language, Cognition and Neuroscience, 38(2), 127–138. https://doi.org/10.1080/23273798.2022.2093390
Geller, J., & Peterson, D. (2021). Is this going to be on the test? Test expectancy moderates the disfluency effect with sans forgetica. Journal of Experimental Psychology: Learning, Memory, and Cognition, 47(12), 1924–1938. https://doi.org/10.1037/xlm0001042
Glanzer, M., & Adams, J. K. (1985). The mirror effect in recognition memory. Memory & Cognition, 13(1), 8–20. https://doi.org/10.3758/bf03198438
Gomez, P., & Perea, M. (2014). Decomposing encoding and decisional components in visual-word recognition: A diffusion model analysis. Quarterly Journal of Experimental Psychology, 67(12), 2455–2466. https://doi.org/10.1080/17470218.2014.937447
Kuchinke, L., Vo, M., Hofmann, M., & Jacobs, A. (2007). Pupillary responses during lexical decisions vary with word frequency but not emotional valence. International Journal of Psychophysiology, 65(2), 132–140. https://doi.org/10.1016/j.ijpsycho.2007.04.004
Malmberg, K. J., & Nelson, T. O. (2003). The word frequency effect for recognition memory and the elevated-attention hypothesis. Memory & Cognition, 31(1), 35–43. https://doi.org/10.3758/bf03196080
Mulligan, N. W. (1996). The effects of perceptual interference at encoding on implicit memory, explicit memory, and memory for source. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22(5), 1067–1087. https://doi.org/10.1037/0278-7393.22.5.1067
Pazzaglia, A. M., Staub, A., & Rotello, C. M. (2014). Encoding time and the mirror effect in recognition memory: Evidence from eyetracking. Journal of Memory and Language, 75, 77–92. https://doi.org/10.1016/j.jml.2014.05.009
Perea, M., Fernández-López, M., & Marcet, A. (2018). Does CaSe-MiXinG disrupt the access to lexico-semantic information? Psychological Research, 84(4), 981–989. https://doi.org/10.1007/s00426-018-1111-7
Perea, M., Gil-López, C., Beléndez, V., & Carreiras, M. (2016). Do handwritten words magnify lexical effects in visual word recognition? Quarterly Journal of Experimental Psychology, 69(8), 1631–1647. https://doi.org/10.1080/17470218.2015.1091016
Plourde, C. E., & Besner, D. (1997). On the locus of the word frequency effect in visual word recognition. Canadian Journal of Experimental Psychology / Revue Canadienne de Psychologie Expérimentale, 51(3), 181–194. https://doi.org/10.1037/1196-1961.51.3.181
Ptok, M. J., Hannah, K. E., & Watter, S. (2020). Memory effects of conflict and cognitive control are processing stage-specific: Evidence from pupillometry. Psychological Research, 85(3), 1029–1046. https://doi.org/10.1007/s00426-020-01295-3
Ptok, M. J., Thomson, S. J., Humphreys, K. R., & Watter, S. (2019). Congruency encoding effects on recognition memory: A stage-specific account of desirable difficulty. Frontiers in Psychology, 10. https://doi.org/10.3389/fpsyg.2019.00858
Staub, A. (2010). The effect of lexical predictability on distributions of eye fixation durations. Psychonomic Bulletin & Review, 18(2), 371–376. https://doi.org/10.3758/s13423-010-0046-9
Sternberg, S. (1969). The discovery of processing stages: Extensions of Donders’ method. Acta Psychologica, 30, 276–315. https://doi.org/10.1016/0001-6918(69)90055-9
Vergara-Martínez, M., Gutierrez-Sigut, E., Perea, M., Gil-López, C., & Carreiras, M. (2021). The time course of processing handwritten words: An ERP investigation. Neuropsychologia, 159, 107924. https://doi.org/10.1016/j.neuropsychologia.2021.107924
Westerman, D. L., & Greene, R. L. (1997). The effects of visual masking on recognition: Similarities to the generation effect. Journal of Memory and Language, 37(4), 584–596. https://doi.org/10.1006/jmla.1997.2531
Yap, M. J., & Balota, D. A. (2007). Additive and interactive effects on response time distributions in visual word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(2), 274–296. https://doi.org/10.1037/0278-7393.33.2.274