---
title: "Bringing Sexy (Webcam Eye-tracking) Back into the Lab: Stage 1 Registered Report"
# If blank, the running header is the title in
running-head: "Webcam eye-tracking in lab"
# Set names and affiliations.
# It is nice to specify everyone's orcid, if possible.
# There can be only one corresponding author, but declaring one is optional.
author:
- name: Jason Geller
corresponding: true
orcid: 0000-0002-7459-4505
email: drjasongeller@gmail.com
# Roles are optional.
# Select from the CRediT: Contributor Roles Taxonomy https://credit.niso.org/
# conceptualization, data curation, formal Analysis, funding acquisition, investigation,
# methodology, project administration, resources, software, supervision, validation,
# visualization, writing, editing
roles:
- Conceptualization
- Writing
- Data curation
- Editing
- Formal analysis
affiliations:
- id: 1
name: "Boston College"
department: Department of Psychology and Neuroscience
address: McGuinn Hall 405
city: Chestnut Hill
region: MA
country: USA
postal-code: 02467-9991
- name: João Veríssimo
orcid: 0000-0002-1264-3017
roles:
- Editing
- Writing
- Formal analysis
affiliations:
- id: 2
name: "University of Lisbon"
department: School of Arts and Humanities
- name: Julia Droulin
orcid: 0000-0002-9689-4189
roles:
- Editing
- Validation
- Formal analysis
affiliations:
- id: 3
name: "Univetsity of North Carolina - Chapel Hill"
department: Speech and Hearing Divison
author-note:
status-changes:
affiliation-change: ~
deceased: ~
disclosures:
# Example: This study was registered at X (Identifier Y).
# Acknowledge and cite data/materials to be shared.
data-sharing: Data, code, and materials for this manuscript can be found at https://osf.io/6sy7k/.
related-report: ~
conflict-of-interest: The authors have no conflicts of interest to disclose.
authorship-agreements: ~
abstract: "Webcam-based eye-tracking offers a scalable and accessible alternative to traditional lab-based systems. While recent studies demonstrate that webcam eye-tracking can replicate canonical effects across domains such as language, memory, and decision-making, questions remain about its precision and reliability. In particular, spatial accuracy, temporal resolution, and attrition rates are often poorer than those observed with research-grade systems, raising the possibility that environmental and hardware factors introduce substantial noise. The present registered report directly tests this hypothesis by bringing webcam eye-tracking back into a controlled laboratory setting. In Experiment 1, we examine the effect of webcam quality (high vs. standard) in a single word Visual World Paradigm (VWP) task, testing whether higher-quality webcams yield stronger competition effects, earlier effect onsets, and reduced attrition. In Experiment 2, we assess the impact of head stabilization (chinrest vs. no chinrest) under identical environmental conditions. Together, these studies isolate the causal influence of hardware and movement on webcam eye-tracking data quality. Results will inform a more methodological understanding of webcam-based eye-tracking, clarifying whether its current limitations are intrinsic to the technology or can be mitigated through improved hardware and experimental control."
keywords: [webcam eye-tracking, webcameras, VWP, Lab-based experimentation, competition, spoken word recognition]
authornote: |
Created with Quarto {{< version >}} and *preprint-typst*
floatsintext: true
numbered-lines: true
bibliography: "references.bib"
suppress-title-page: false
link-citations: false
mask: false
masked-citations:
draft-date: false
lang: en
language:
citation-last-author-separator: "and"
citation-masked-author: "Masked Citation"
citation-masked-date: "n.d."
citation-masked-title: "Masked Title"
email: "drjasongeller@gmail.com"
title-block-author-note: "Author Note"
title-block-correspondence-note: "Correspondence concerning this article should be addressed to"
title-block-role-introduction: "Author roles were classified using the Contributor Role Taxonomy (CRediT; https://credit.niso.org/) as follows:"
csl: https://www.zotero.org/styles/apa
execute:
echo: true
eval: false
warning: false
message: false
fig-align: "center"
tbl-align: "center"
keep-with-next: true
code-overflow: wrap
cache: true
ft-align: "center"
out-width: 50%
fig-dpi: 500
knitr:
opts_chunk:
dev: "ragg_png"
---
Online experimentation in the behavioral sciences has advanced considerably since its introduction at the 1996 Society for Computers in Psychology (SCiP) conference in Chicago, IL [@reips2021]. One methodological domain that has shown particular promise in moving online is eye tracking. Traditionally, eye-tracking studies required controlled laboratory settings equipped with specialized and costly hardware—a process that is both resource- and time-intensive. More recently, however, a growing body of research has shown that eye tracking can be successfully adapted to online environments [e.g., @bogdan2024; @bramlett2024; @özsoy2023; @prystauka2024; @slim2023; @slim2024; @vandercruyssen2023; @vos2022; @jamesWhatParadigmsCan2025; @yangWebcambasedOnlineEyetracking2021]. By leveraging standard webcams, researchers can now record eye movements remotely, making it possible to collect data from virtually any location at any time. This shift not only enhances scalability, but also broadens access to more diverse and representative participant samples.
Webcam-based eye tracking has become an increasingly viable and accessible method for behavioral research. Implementation typically requires only a standard computing device (e.g., laptop, desktop, tablet, or smartphone) equipped with a built-in or external webcam. Data are collected through a web browser running dedicated software capable of recording and estimating gaze position in real time. This accessibility has been further enhanced by the integration of webcam-based eye tracking into several established experimental platforms, including Gorilla [@Anwyl-Irvine2020], PsychoPy/PsychoJS [@Peirce2019], jsPsych [@de2015], PCIbex [@zehr2022], and Labvanced [@Kaduk2024].
To reliably estimate where users are looking, webcam-based eye tracking typically relies on appearance-based methods, which infer gaze direction directly from visual features of the eye region (e.g., pupil and iris appearance) [@chengAppearancebasedGazeEstimation2024; @saxenaDeepLearningModels2024]. Recent work has extended these methods using deep learning to learn gaze–appearance mappings directly from data [e.g., @Kaduk2024; @saxenaDeepLearningModels2024]. This contrasts with research-grade eye trackers, which use model-based algorithms combining infrared illumination with geometric modeling of the pupil and corneal reflections [@chengAppearancebasedGazeEstimation2024].
The most widely used library for webcam eye tracking is WebGazer.js [@papoutsakiWebGazerScalableWebcam2016a; @patterson2025]. WebGazer.js is an open-source JavaScript library that performs real-time gaze estimation using standard webcams. It is an appearance-based method that leverages computer vision techniques to detect the face and eyes, extract image features, and map these features onto known screen coordinates during a brief calibration procedure. Once trained, gaze locations on the screen are estimated via ridge regression [@papoutsakiWebGazerScalableWebcam2016a].
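As a concrete illustration of this calibration-then-regression idea, the toy R snippet below fits a ridge mapping from made-up eye-image features to known screen positions and then predicts gaze for a new frame. It is only a sketch of the general technique: the feature values are synthetic, and WebGazer.js itself performs feature extraction and regression in JavaScript within the browser.

```{r}
#| eval: false
# Toy sketch of appearance-based gaze estimation via ridge regression
# (synthetic feature values; not WebGazer.js's actual implementation).
set.seed(1)
n_calib <- 9    # nine calibration targets
n_feat  <- 12   # simplified eye-image feature vector
X <- matrix(rnorm(n_calib * n_feat), n_calib, n_feat)  # features at each calibration fixation
Y <- cbind(x = runif(n_calib), y = runif(n_calib))     # known normalized screen positions
lambda <- 0.1                                          # ridge penalty
B <- solve(crossprod(X) + lambda * diag(n_feat), crossprod(X, Y))  # ridge weights
new_frame <- matrix(rnorm(n_feat), nrow = 1)           # features from a new webcam frame
new_frame %*% B                                        # estimated (x, y) gaze position
```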
Although webcam eye-tracking is still relatively new, validation efforts are steadily accumulating and the results are encouraging. Researchers have successfully applied webcam-based methods to domains such as language [e.g., @bramlettArtWranglingWorking2025; @prystauka2024; @gellerLanguageBordersStepbystep2025f], judgment and decision-making [e.g., @yangWebcambasedOnlineEyetracking2021], and memory [e.g., @jamesWhatParadigmsCan2025]. Overall, these studies replicate canonical effects and show strong convergence with findings from traditional lab-based eye-tracking systems.
However, there are several limitations associated with web-based eye tracking. First, effect sizes are typically smaller than those observed in lab-based studies [@bogdan2024; @degen2021; @kandel2024; @slim2024; @slim2023; @vandercruyssen2023], which generally necessitates larger sample sizes to achieve comparable statistical power. Second, relative to research-grade eye trackers, both spatial accuracy/precision and temporal resolution tend to be lower. Spatial accuracy refers to the extent to which measured gaze positions deviate from the true gaze point, whereas precision reflects the consistency of those measurements over time [@carter2020]. In webcam-based eye tracking, spatial accuracy and precision often exceed 1° of visual angle [@semmelmann2018]. Regarding temporal resolution, sampling rates are typically more variable, with most webcams rarely exceeding 30 Hz. Consequently, detectable effects tend to span a relatively broad temporal range of approximately 50–1000 ms [@semmelmann2018; @slim2024; @slim2023; @gellerLanguageBordersStepbystep2025f]. These constraints make webcam eye tracking less suitable for studies that require fine-grained spatial or temporal fidelity—for example, paradigms involving many or small areas of interest (AOIs) [@jamesWhatParadigmsCan2025] or tasks requiring millisecond-level temporal precision [@slim2024]. Lastly, webcam-based studies tend to exhibit higher attrition rates. For instance, @patterson2025 reported an average attrition rate of approximately 13% across studies, with substantial variability across individual experiments [see also @gellerLanguageBordersStepbystep2025f; @prystauka2024].
An open question is whether the limitations of web-based eye-tracking primarily stem from the WebGazer.js algorithm itself or from environmental and hardware constraints—and, crucially, whether future improvements can mitigate these issues. On the algorithmic side, recent work [e.g., @jamesWhatParadigmsCan2025; also see @yangWebcambasedOnlineEyetracking2021] demonstrated that modifying WebGazer.js so that the sampling rate is polled consistently and timestamps are aligned to data acquisition (rather than completion) markedly improves temporal resolution. Implementing these changes within online experiment platforms such as Gorilla and jsPsych has brought webcam-based eye-tracking closer to the timing fidelity achieved in laboratory settings. For example, using the Gorilla platform, @prystauka2024 reported a 50 ms timing difference, while @gellerLanguageBordersStepbystep2025f observed a 100 ms difference between lab-based and online effects.
To our knowledge, no study has directly tested how environmental and hardware constraints impact webcam-based eye-tracking data. @slim2023 provided some evidence suggesting that hardware quality may underlie some of these limitations, reporting a positive correlation between webcam sample rate and calibration accuracy. Similarly, @gellerLanguageBordersStepbystep2025f found that participants who failed calibration more often reported using standard-quality built-in webcams and working in suboptimal environments (e.g., natural lighting). Together, these findings suggest that both hardware and environmental factors may contribute to the increased noise commonly observed in online eye-tracking data.
## Proposed Research
To address environmental and technical sources of noise in webcam eye-tracking, we plan to bring participants into the lab to complete a Gorilla-hosted webcam task and to manipulate two factors across two experiments. Experiment 1 will vary webcam quality (high- vs. standard-quality external cameras), and Experiment 2 will vary head stabilization (with vs. without a chin rest). All sessions will be conducted under standardized conditions: identical ambient lighting, a fixed viewing distance, the same display and computer model, and controlled network settings. This design allows us to isolate the causal effects of hardware and head movement on data quality. Our key questions are whether higher-quality webcams and reduced head movement decrease noise, thereby (a) increasing effect sizes (a higher proportion of looks), (b) yielding earlier onsets of established effects, and (c) reducing calibration failures and attrition.
To examine these factors, we replicate a paradigm widely used in psycholinguistics—the Visual World Paradigm (VWP) [@cooper1974; @tanenhaus1995]. The VWP has been successfully adapted for webcam-based eye-tracking [@bramlett2024; @bramlettArtWranglingWorking2025; @gellerLanguageBordersStepbystep2025f; @prystauka2024]. While there are variations in implementation [see @huettig2011], in the version most relevant to the present study, each trial presents four images positioned in the four screen quadrants while a spoken word is played. Participants then select the picture that matches the utterance.
In paradigms of this kind, item sets are typically designed so that the display contains a target (e.g., carrot), a cohort competitor (e.g., carriage), a rhyme competitor (e.g., parrot), and an unrelated distractor (e.g., tadpole). This configuration allows researchers to examine the dynamics of lexical competition—for example, how phonologically similar words like *carriage* (cohort effect) or *parrot* (the rhyme effect) affect online speech processing. Notably, such competition effects have also been replicated in webcam-based VWP studies [e.g., @gellerLanguageBordersStepbystep2025f; @slim2024], highlighting this paradigm as a particularly strong test case for the present investigation.
# Experiment 1: High-Quality Webcam vs. Standard-Quality Webcam
Both @slim2023 and @gellerLanguageBordersStepbystep2025f observed a clear relationship between webcam quality and calibration accuracy in webcam-based eye-tracking. Building on these findings, Experiment 1 tests how webcam quality influences competition effects in a single-word VWP. Specifically, we ask whether a higher-quality webcam yields (a) a greater proportion of looks to relevant interest areas (i.e., stronger detectability of competition), (b) an earlier emergence of these effects over time, and (c) lower data attrition rates relative to a lower-quality webcam.
To address this, participants will complete the same VWP task using one of two webcam types: a high-quality external webcam (e.g., Logitech Brio) or a standard external webcam designed to emulate a typical built-in laptop camera (e.g., Logitech C270). The high-quality webcam offers higher resolution, a higher sampling rate, greater frame-rate stability, and more consistent illumination handling—factors expected to enhance gaze precision and tracking reliability. In contrast, standard webcams, while representative of most participants’ home setups, typically provide lower frame rates and exhibit greater variability under different lighting conditions. Comparing these two setups enables a direct assessment of how hardware quality constrains the strength, timing, and reliability of linguistic competition effects in webcam-based eye-tracking.
## Hypotheses
We hypothesize several effects related to competition, onset, and attrition.
### Competition Effects
(H1a) Participants will show a competition effect, with more looks directed toward cohort competitors than unrelated distractors. (H1b) Webcam quality (high vs. standard) will influence the overall proportion of looks, with higher-quality webcams detecting a greater number of looks. (H1c) There will be an interaction between webcam quality and competition, such that the magnitude of the competition effect will be larger in the high-quality webcam condition than in the standard-quality condition.
### Onset Effects
(H2a) Looks to cohort competitors will emerge earlier than looks to unrelated distractors. (H2b) The onset of looks will occur earlier in the high-quality webcam condition than in the standard-quality condition. (H2c) Consequently, the competition effect will emerge sooner in the high-quality webcam condition compared to the standard-quality condition.
### Attrition
(H3) Attrition rates will be lower in the high-quality webcam condition than in the standard-quality webcam condition.
# Method
All stimuli (audio and images), code, and data (raw and summary) will be placed on OSF at this link: <https://osf.io/cf6xr/overview>. The entire experiment will be stored on Gorilla's open materials with a link to preview the tasks. In addition, the code and manuscript will be fully reproducible using Quarto and the package manager nix [@nix] in combination with the R package {rix} [@rix]. Together, nix and {rix} enable reproducible computational environments at both the system and package levels. This manuscript and all of the necessary files to reproduce it can be found on GitHub: <https://github.com/jgeller112/Webcam2Lab-VWP>.
## Sampling Goal
We conducted an a priori power analysis via Monte Carlo simulation in R. Data from 21 participants, collected online with the Gorilla experimental platform during development of the webgazeR package and using the same stimuli and VWP design, were used to seed the simulations. In these data, we observed a cohort effect of approximately 3%. Using this value as our seed, we collapsed the data across time bins to compute binomial counts per trial and fit a binomial generalized linear mixed model (GLMM) to obtain fixed-effect estimates. We then augmented the dataset by adding a between-subjects factor for webcam quality, with participants evenly assigned to high- and standard-quality groups. In the high-quality webcam group, we modeled both a higher overall fixation rate and a larger cohort effect, whereas in the standard-quality group the cohort effect was halved relative to the high-quality group. Simulated datasets (*N* = 5000) were generated under this model, and the planned GLMM—including a condition × webcam interaction—was refit to each simulated dataset. Power was estimated as the proportion of simulations in which the interaction term exceeded \|z\| = 1.96. The script for this power analysis is located here: <https://osf.io/4trmn/files/a46g8>. Results indicated that 35 participants per group (*N* = 70) would provide approximately 90% power to detect the hypothesized reduction in the cohort effect and overall fixation rate under standard-quality webcam conditions. We will therefore run the study until we have 70 usable participants (35 in each group). For the calibration analysis (see below), all participants who enter the study will be included.
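For transparency, the condensed sketch below illustrates the structure of the simulation: generate binomial gaze counts with a halved cohort effect in the standard-quality group, fit the GLMM, and tally significant interaction terms. The parameter values and column names here are illustrative placeholders; the registered script linked above contains the actual seed estimates and full settings.

```{r}
#| eval: false
# Condensed sketch of the Monte Carlo power simulation (illustrative values only;
# see the registered script at https://osf.io/4trmn/files/a46g8 for actual parameters).
library(lme4)

simulate_once <- function(n_per_group = 35, n_trials = 60, n_samp = 13,
                          b0 = qlogis(.10),  # baseline looks to unrelated (placeholder)
                          b_cohort = .35,    # cohort effect, high-quality group (placeholder)
                          b_cam = .20,       # overall boost for high-quality webcam (placeholder)
                          sd_subj = .50) {
  subj <- factor(seq_len(2 * n_per_group))
  cam  <- rep(c(.5, -.5), each = n_per_group)              # webcam group, effects-coded
  u    <- rnorm(length(subj), 0, sd_subj)                  # subject random intercepts
  dat  <- expand.grid(subj = subj, trial = seq_len(n_trials),
                      item = c(.5, -.5))                   # cohort vs. unrelated, effects-coded
  dat$cam    <- cam[as.integer(dat$subj)]
  slope      <- ifelse(dat$cam > 0, b_cohort, b_cohort / 2)  # cohort effect halved for standard
  eta        <- b0 + b_cam * dat$cam + slope * dat$item + u[as.integer(dat$subj)]
  dat$n_samp <- n_samp                                     # valid samples per trial (collapsed)
  dat$n_fix  <- rbinom(nrow(dat), dat$n_samp, plogis(eta))
  dat
}

power_sim <- function(n_sims = 200) {                      # 5000 in the registered analysis
  hits <- replicate(n_sims, {
    d <- simulate_once()
    m <- glmer(cbind(n_fix, n_samp - n_fix) ~ item * cam + (1 | subj),
               family = binomial, data = d)
    abs(coef(summary(m))["item:cam", "z value"]) > 1.96    # significant interaction?
  })
  mean(hits)                                               # estimated power
}
```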
## Materials
### VWP
#### Picture Stimuli
Stimuli were adapted from @colby2023. Each set comprised four images: a target, an onset (cohort) competitor, a rhyme competitor, and an unrelated item (e.g., rocket, rocker, pocket, bubble). For the webcam study, we used 30 sets (15 monosyllabic, 15 bisyllabic).
Within each set, only the target and its onset competitor served as auditory targets once each, yielding two trial types: TCRU (target–cohort–rhyme–unrelated) and TCUU (target–cohort–unrelated–unrelated). This resulted in 60 trials total (30 sets × 2 targets per set). A MATLAB script generated a unique randomized list per participant, pseudo-randomizing display positions so that each image type was approximately equally likely to appear in any quadrant across subjects.
All 120 images were drawn from a commercial clipart database, selected by a small focus group of students, and edited to have a cohesive style using a standard lab protocol [@mcmurray2010]. All images were scaled to 300 × 300 pixels.
#### Auditory Stimuli
Auditory stimuli were recorded by a female monolingual speaker of English in a sound-attenuated room and sampled at 44.1 kHz. Auditory tokens were edited to reduce noise and remove clicks, then amplitude-normalized to 70 dB SPL. All .wav files were converted to .mp3 for online data collection.
#### Webcams
To manipulate recording quality, two webcams will be used. In the high-quality condition, we will use a Logitech Brio webcam that records in 4K resolution (up to 4096 × 2160 px) with a 90° field of view, providing high-fidelity video. In the standard-quality condition, we will use a Logitech C270 HD webcam that records in 720p resolution, producing video comparable to that of a typical laptop webcam and thereby simulating lower-quality online recordings.
Both webcams will be mounted in a fixed position above the monitor to maintain consistent framing across participants. Lighting will be standardized to ensure uniform image quality across all sessions.
## Experimental Setup and Procedure
All tasks will be completed in a single session lasting approximately 30 minutes. The experiment will be programmed and administered in Gorilla [@Anwyl-Irvine2020]. Participants will be brought into a room in the Human Neuroscience Lab at Boston College and seated in front of a 23-inch Dell U2312HM monitor (1920 × 1080 px) approximately 65 cm from the screen. Auditory stimuli will be presented over Sony ZX110 headphones to ensure consistent audio presentation and minimize background noise. The experimental tasks will be presented in a fixed order: informed consent, single-word VWP, and a demographic questionnaire. The entire experiment can be viewed on Gorilla at this link:.
Before the main task, an instructional video will demonstrate the calibration procedure. Calibration will occur twice—once at the start and again after 30 trials—with up to three attempts allowed each time. In each calibration phase, participants will view nine calibration targets and five validation points, looking directly at each target as instructed. Participants will then complete four practice trials to familiarize themselves with the task. Each trial begins with a 500 ms central fixation cross, followed by a preview display of four images located in the screen’s corners. After 1500 ms, a start button appears at the center; participants click it to confirm fixation before hearing the spoken word. The images remain visible throughout the trial, and participants indicate their response by clicking the image corresponding to the spoken target. A response deadline of 5 seconds will be used. Eye movements are recorded continuously during each trial. Following the main VWP task, participants will complete a brief demographic questionnaire, after which they will be thanked for their participation.
## Data Preprocessing and Exclusions
We will follow guidelines outlined in @gellerLanguageBordersStepbystep2025f. At the participant level, individuals with overall task accuracy below 80% will be excluded. At the trial level, only correct-response trials (accuracy = 1) will be retained. Reaction times (RTs) outside ±2.5 SD of the participant-level distribution (computed within condition) will be discarded.
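A minimal dplyr sketch of these behavioral exclusions is shown below; the column names (participant, condition, accuracy, rt) are assumptions about the merged Gorilla output.

```{r}
#| eval: false
# Behavioral exclusions: participants < 80% accuracy, incorrect trials, RTs beyond ±2.5 SD
# (column names are assumed; adapt to the merged Gorilla output).
library(dplyr)

trials_clean <- trials_raw |>
  group_by(participant) |>
  filter(mean(accuracy) >= .80) |>                 # participant-level accuracy cutoff
  ungroup() |>
  filter(accuracy == 1) |>                         # correct trials only
  group_by(participant, condition) |>
  filter(abs(rt - mean(rt)) <= 2.5 * sd(rt)) |>    # trim RTs within participant x condition
  ungroup()
```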
For eye-tracking preprocessing, we will use the {webgazeR} package in R, which contains helper functions for preprocessing webcam eye-tracking data. All webcam eye-tracking files and behavioral data will be merged. Data quality will be screened via sampling-rate checks, with very low-frequency recordings (e.g., \< 5 Hz) excluded at the participant and trial level [@bramlettArtWranglingWorking2025; @vos2022]. We will quantify out-of-bounds (OOB) samples—gaze points falling outside the normalized screen coordinates (0–1 in both x and y)—and remove participants and trials with excessive OOB data (\> 30%). Remaining OOB samples will be discarded prior to analysis. In addition, Gorilla provides calibration/quality metrics (“convergence” and “confidence,” both ranging from 0 to 1); trials with convergence above 0.5 or confidence below 0.5 will be excluded.
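The sketch below illustrates the trial-level screening logic under assumed column names (x and y in normalized screen coordinates, time_ms, convergence, confidence); in practice we will use the {webgazeR} helper functions, and analogous checks will be applied at the participant level.

```{r}
#| eval: false
# Trial-level screening sketch: effective sampling rate, out-of-bounds (OOB) samples,
# and Gorilla's convergence/confidence metrics (assumed column names).
library(dplyr)

screened <- gaze_raw |>
  group_by(participant, trial) |>
  mutate(
    sample_rate = n() / (max(time_ms) - min(time_ms)) * 1000,  # effective Hz per trial
    oob = x < 0 | x > 1 | y < 0 | y > 1,                       # outside normalized screen
    prop_oob = mean(oob)
  ) |>
  ungroup() |>
  filter(sample_rate >= 5, prop_oob <= .30) |>    # drop low-rate and high-OOB trials
  filter(convergence <= .5, confidence >= .5) |>  # keep well-calibrated trials
  filter(!oob)                                    # discard remaining OOB samples
```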
Areas of Interest (AOIs) will be defined in normalized coordinates as the four screen quadrants, and gaze samples will be assigned to AOIs accordingly. To create a uniform time base, data will be resampled into 100-ms bins. Trial time will be aligned to the actual stimulus onset using the audio-onset metric provided by Gorilla. We will then subtract 200 ms to approximate saccade programming and execution latency [@viviani1990], and an additional 100 ms to account for silence prefixed to the audio recordings.
For analysis, within each participant × trial × time bin we will compute, for each AOI, the number of valid gaze samples in that AOI (“successes”) and the total number of valid samples in the bin (“trials”). These binomial counts (or their proportions) will serve as inputs to the statistical models and summaries; subject- and condition-level aggregates will be obtained by averaging across trials for descriptive plots.
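Building on the screened samples above, the sketch below shows one way to realign time, assign quadrant AOIs, and derive the per-bin binomial counts. The aoi_lookup table (mapping each trial's quadrants to image roles) and the column names are hypothetical; the registered pipeline will use the {webgazeR} helpers for these steps.

```{r}
#| eval: false
# Sketch: realign time to word onset, assign quadrant AOIs, and compute per-bin counts
# (aoi_lookup is a hypothetical trial-level table mapping quadrants to image roles).
library(dplyr)

counts <- screened |>
  mutate(
    time = (time_ms - audio_onset_ms) - 200 - 100,   # oculomotor delay + leading silence
    bin  = floor(time / 100) * 100,                  # 100-ms bins
    quadrant = case_when(                            # normalized screen quadrants
      x <  .5 & y <  .5 ~ "q1",
      x >= .5 & y <  .5 ~ "q2",
      x <  .5 & y >= .5 ~ "q3",
      TRUE              ~ "q4"
    )
  ) |>
  left_join(aoi_lookup, by = c("participant", "trial", "quadrant")) |>  # quadrant -> image role
  group_by(participant, trial, bin, image_role) |>
  summarise(successes = n(), .groups = "drop_last") |>  # samples in each AOI per bin
  mutate(total = sum(successes)) |>                     # all valid samples in the bin
  ungroup()
```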
## Analysis Plan
### Competition and Onset Effects
To analyze overall competition effects and onset latency, we will use generalized additive mixed models [GAMMs; @Wood2017]. GAMMs extend the generalized linear modeling framework by modeling effects that are expected to vary nonlinearly over time, a common feature of the VWP [@brown-schmidt2025; @mitterer2025; @verissimo2025novel; @ito2022]. These models capture nonlinear effects by fitting smoothing splines—or “wiggles”—to the data using data-driven smoothing methods. This approach reduces the risk of overfitting and eliminates the need for polynomial terms, as required in traditional growth curve models [@mirman]. Importantly, GAMMs also allow researchers to account for autocorrelation in time-series data, which is especially critical in gaze analyses where successive samples are not independent. By modeling the autocorrelation structure, GAMMs provide more accurate estimates of temporal effects and prevent inflation of Type I error rates. In addition, fitting GAMMs allows us to estimate the onset of the competition effect in each condition [see @verissimo2025novel].
Gaze samples will be analyzed with a binomial (logistic) GAMM using the `bam()` function from the {mgcv} package [@Wood2017]. For visualization, we will employ functions from the {tidygam} package [@tidygam], and we will use the {onsets} package [@verissimo2025novel] to examine onset latencies. The dependent variable will consist of gaze counts to cohort and unrelated pictures for each participant in each 100 ms time bin. All analyses will be conducted on a window ranging from −100 ms to 1200 ms relative to target onset.
We will fit a model including parametric terms for webcam type (effects-coded: high = 0.5, standard = –0.5), item type (effects-coded: cohort = 0.5, unrelated = –0.5), and their interaction, capturing the overall (time-independent) effects. To examine how webcam type moderates the cohort effect over time, these two factors will also be combined into a single four-level factor.[^1] Nonlinear, time-dependent effects will be modeled using factor smooths for time and for time-by-condition interactions, with condition treated as a categorical variable. To account for individual differences, we will include random smooths by participant and random smooths for time by participant for each level of condition. This specification allows the model to capture (a) overall differences in fixation proportions between conditions (via the parametric terms), (b) dynamic, time-varying trajectories unique to each condition (via the smooth terms), and (c) participant-specific deviations from these group-level patterns (via the random smooths). While it is common to specify maximal random-effects structures [@barr2013], these can be costly when fitting GAMMs [@verissimo2025novel]. The current model specification follows @verissimo2025novel. All effects will be judged statistically significant if the *p* value is \< .05.
[^1]: GAMs are inherently additive, meaning that interactions between nonlinear (smooth) terms cannot be estimated directly. To evaluate time-varying interactions or simple effects, it is therefore standard practice to combine relevant factors into a single composite factor and fit condition-specific smooths [@coretta2024].
To account for autocorrelation in the residuals, we will first fit the model without an autoregressive term in order to estimate the autocorrelation parameter (ρ). We will then re-fit the model including a first-order autoregressive process (AR(1)) to properly model temporal dependencies. Although using larger time bins can reduce autocorrelation, it does not eliminate it entirely, so explicitly modeling residual autocorrelation ensures valid statistical inference. Template code to fit the models is included below.
```{r}
#| eval: false
library(mgcv)

# quick rho estimate (fit once without AR to get the residual ACF at lag 1)
# combine levels of both factors into one factor
dat$cond4 <- interaction(dat$condition, dat$webcam, drop = TRUE)
m0 <- bam(cbind(fix, fail) ~ 1 + cond_c*cam_c +
            s(time, k = 10) +
            s(time, by = cond4, k = 10) +                 # condition-specific curves (interaction)
            s(participant, bs = "re") +                   # random intercepts
            s(time, participant, by = cond4, bs = "re"),  # random smooths for time/subject by condition
          family = binomial(), method = "fREML",
          discrete = TRUE, data = dat, na.action = na.omit, select = TRUE)
rho <- acf(residuals(m0, type = "pearson"), plot = FALSE)$acf[2]
# final model with AR(1) to handle within-series autocorrelation
# (series_start is an assumed logical column marking the first bin of each participant x trial series)
m1 <- bam(cbind(fix, fail) ~ 1 + cond_c*cam_c +
            s(time, k = 10) +
            s(time, by = cond4, k = 10) +                 # condition-specific curves (interaction)
            s(participant, bs = "re") +                   # random intercepts
            s(time, participant, by = cond4, bs = "re"),  # subject smooths
          family = binomial(), method = "fREML",
          rho = rho, AR.start = dat$series_start,         # AR(1) using the estimated rho
          discrete = TRUE, data = dat, na.action = na.omit, select = TRUE)
summary(m1)
```
```{r}
#| eval: false
library(onsets)

# Obtain onsets in each condition (and their differences)
onsets_comp <- get_onsets(model = m1,          # Fitted GAMM
                          time_var = "time",   # Name of time variable
                          by_var = "cond4",    # Name of condition/group factor used in m1
                          difference = TRUE,   # Obtain differences between onsets
                          n_samples = 10000,   # Large number of samples (less variable results)
                          seed = 1,            # Random seed for reproducibility
                          silent = TRUE)       # Cleaner output in documentation
```
## Calibration
To examine whether webcam quality affects calibration failure, we will fit a logistic regression model using the `glm()` function and the code below.
```{r}
# calibration: assumed binary indicator (1 = failed calibration, 0 = passed); data frame name is a placeholder
glm(calibration ~ webcam, data = calib_dat, family = binomial(link = "logit"))
```
# Experiment 2: Head Stabilization (Chin Rest) vs. No Head Stabilization
In Experiment 2, we will use the same standard-quality webcam as in Experiment 1 but will manipulate head stability by comparing a chin-rest condition to a no–chin-rest condition. Some online platforms [e.g., Labvanced; @Finger2017LabVanced] mitigate head motion by warning participants when they move outside a predefined region; however, it remains unclear how head movement itself affects WebGazer.js gaze estimates and, in turn, the detection and onset timing of experimental effects. We therefore test the following hypotheses regarding competition, onset, and attrition.
## Hypotheses
We hypothesize several effects concerning competition, onset, and attrition. Participants are expected to show a competition effect, with more looks directed toward cohort competitors than toward unrelated distractors. The use of a chin rest is predicted to influence the overall proportion of looks, and the competition effect is predicted to be larger when participants use a chin rest than when they do not. We also expect looks to cohort competitors to emerge earlier than looks to unrelated distractors, with onsets occurring sooner in the chin-rest condition. Finally, we anticipate that attrition rates will be lower in the chin-rest condition than in the no–chin-rest condition.
## Sampling Goal, Materials, and Procedure
The sampling goal, materials, and procedure are identical to those of Experiment 1; the only difference is whether participants use a chin rest.