Clear communication is essential when you retract a story, a finding… an error!

Psychology researcher explains how retraction-causing errors led to change in her lab

Last month, we brought you the story of two retractions by Yale’s Laurie Santos because the team discovered errors in the way the first author had coded the data. That first author, Neha Mahajan, took full responsibility for the coding problems, according to the retraction notices, and a university investigation cleared her of any “intentional, knowing, reckless, or grossly negligent action.”

But a few of our readers noted that the papers refer to a second coder on some of the experiments, and questioned whether that is compatible with Mahajan being solely responsible for the errors.

We asked Santos earlier this week to explain the apparent discrepancy, which she did along with a description of how her lab has made changes to prevent such errors in the future:

Both retracted papers followed what (until recently) was the typical procedure for coding looking-time studies in my lab: we had one experimenter (in this case, Neha) code the duration of subjects’ looking for all the sessions and then used this first experimenter’s measurements in all the analyses we report in the paper. We then had a second experimenter code only a small subset of the sessions (JPSP paper: 10 of the sessions of Experiment 1; Developmental Science paper: 6 of the sessions of Experiment 1) to establish reliability with the first coder. Basically, we check that a second coder would get qualitatively similar results to the first coder on a random subset of the sessions by performing a correlation on the two coders’ measurements.

In both of the now-retracted studies, these reliability correlations were a bit lower than we had observed in previously published studies using the same techniques (r = 0.75 in JPSP, r = 0.85 in Developmental Science), but at the time we didn’t consider them low enough to raise any red flags (and neither did the reviewers), so we used the first coder’s measurements alone in the original analyses. Unfortunately, double-coding only a subset of the data wasn’t enough to spot the problem.

We’ve now changed our lab’s coding procedures to prevent this in the future: all our studies will now have a second coder recode all sessions, so that every data point can be double-checked. If we had used this full-dataset double-coding procedure before, we probably would have detected the problems in the retracted papers.
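The subset reliability check Santos describes can be sketched in a few lines. The following is a hypothetical illustration only: the looking-time values and the 0.9 threshold are invented for the example, not taken from the retracted papers.

```python
# Hypothetical sketch of a subset reliability check: a second coder
# re-codes some sessions, and we correlate the two coders' measurements.
# All numbers and the threshold below are illustrative assumptions.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of measurements."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Looking times (seconds) for the same subset of sessions,
# coded independently by two experimenters (invented data).
coder1 = [4.2, 6.1, 3.8, 7.5, 5.0, 6.8]
coder2 = [4.0, 6.3, 3.9, 7.2, 5.1, 6.5]

r = pearson_r(coder1, coder2)
if r < 0.9:  # an assumed lab-specific cutoff, for illustration only
    print(f"reliability r = {r:.2f} -- low; recode all sessions")
else:
    print(f"reliability r = {r:.2f} -- acceptable")
```

The weakness the lab identified is visible in the design: a high correlation on a small subset can coexist with errors in the sessions that were never double-coded, which is why the new procedure recodes every session.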

The second of the two retractions, which was not yet live when we published our first post, has been published in the Journal of Personality and Social Psychology. (The full version, available behind a paywall, is the same as we reported it would be, but the abstracted version is missing the last few lines.)
