How Many Replicators Does It Take to Achieve Reliability: Investigating Researcher Variability in a Crowdsourced Replication

Research output: Book/Report › Report › Research

Standard

How Many Replicators Does It Take to Achieve Reliability: Investigating Researcher Variability in a Crowdsourced Replication. / Merhout, Friedolin; Breznau, Nate.

2021. 37 p.


Harvard

Merhout, F & Breznau, N 2021, How Many Replicators Does It Take to Achieve Reliability: Investigating Researcher Variability in a Crowdsourced Replication. <https://osf.io/preprints/socarxiv/j7qta/>

APA

Merhout, F., & Breznau, N. (2021). How Many Replicators Does It Take to Achieve Reliability: Investigating Researcher Variability in a Crowdsourced Replication. https://osf.io/preprints/socarxiv/j7qta/

Vancouver

Merhout F, Breznau N. How Many Replicators Does It Take to Achieve Reliability: Investigating Researcher Variability in a Crowdsourced Replication. 2021. 37 p.

Author

Merhout, Friedolin; Breznau, Nate. / How Many Replicators Does It Take to Achieve Reliability: Investigating Researcher Variability in a Crowdsourced Replication. 2021. 37 p.

BibTeX

@book{5e1ba64f31b3478687eb94e8a3633066,
title = "How Many Replicators Does It Take to Achieve Reliability: Investigating Researcher Variability in a Crowdsourced Replication",
abstract = "The paper reports findings from a crowdsourced replication. Eighty-four replicator teams attempted to verify results reported in an original study by running the same models with the same data. The replication involved an experimental condition. A “transparent” group received the original study and code, and an “opaque” group received the same underlying study but with only a methods section anddescription of the regression coefficients without size or significance, and no code. The transparent group mostly verified the original study (95.5%), while the opaque group had less success (89.4%). Qualitative investigation of the replicators{\textquoteright} workflows reveals many causes of non-verification. Two categories ofthese causes are hypothesized, routine and non-routine. After correcting non-routine errors in the research process to ensure that the results reflect a level of quality that should be present in {\textquoteleft}real-world{\textquoteright} research, the rate of verification was 96.1% in the transparent group and 92.4% in the opaque group. Twoconclusions follow: (1) Although high, the verification rate suggests that it would take a minimum of three replicators per study to achieve replication reliability of at least 95% confidence assuming ecological validity in this controlled setting, and (2) like any type of scientific research, replication is prone to errors that derive from routine and undeliberate actions in the research process. The lattersuggests that idiosyncratic researcher variability might provide a key to understanding part of the “reliability crisis” in social and behavioral science and is a reminder of the importance of transparent and well documented workflows.",
author = "Friedolin Merhout and Nate Breznau",
year = "2021",
language = "English",

}

RIS

TY - RPRT

T1 - How Many Replicators Does It Take to Achieve Reliability

T2 - Investigating Researcher Variability in a Crowdsourced Replication

AU - Merhout, Friedolin

AU - Breznau, Nate

PY - 2021

Y1 - 2021

N2 - The paper reports findings from a crowdsourced replication. Eighty-four replicator teams attempted to verify results reported in an original study by running the same models with the same data. The replication involved an experimental condition. A “transparent” group received the original study and code, and an “opaque” group received the same underlying study but with only a methods section and description of the regression coefficients without size or significance, and no code. The transparent group mostly verified the original study (95.5%), while the opaque group had less success (89.4%). Qualitative investigation of the replicators’ workflows reveals many causes of non-verification. Two categories of these causes are hypothesized: routine and non-routine. After correcting non-routine errors in the research process to ensure that the results reflect a level of quality that should be present in ‘real-world’ research, the rate of verification was 96.1% in the transparent group and 92.4% in the opaque group. Two conclusions follow: (1) Although high, the verification rate suggests that it would take a minimum of three replicators per study to achieve replication reliability of at least 95% confidence, assuming ecological validity in this controlled setting, and (2) like any type of scientific research, replication is prone to errors that derive from routine and undeliberate actions in the research process. The latter suggests that idiosyncratic researcher variability might provide a key to understanding part of the “reliability crisis” in social and behavioral science and is a reminder of the importance of transparent and well-documented workflows.

AB - The paper reports findings from a crowdsourced replication. Eighty-four replicator teams attempted to verify results reported in an original study by running the same models with the same data. The replication involved an experimental condition. A “transparent” group received the original study and code, and an “opaque” group received the same underlying study but with only a methods section and description of the regression coefficients without size or significance, and no code. The transparent group mostly verified the original study (95.5%), while the opaque group had less success (89.4%). Qualitative investigation of the replicators’ workflows reveals many causes of non-verification. Two categories of these causes are hypothesized: routine and non-routine. After correcting non-routine errors in the research process to ensure that the results reflect a level of quality that should be present in ‘real-world’ research, the rate of verification was 96.1% in the transparent group and 92.4% in the opaque group. Two conclusions follow: (1) Although high, the verification rate suggests that it would take a minimum of three replicators per study to achieve replication reliability of at least 95% confidence, assuming ecological validity in this controlled setting, and (2) like any type of scientific research, replication is prone to errors that derive from routine and undeliberate actions in the research process. The latter suggests that idiosyncratic researcher variability might provide a key to understanding part of the “reliability crisis” in social and behavioral science and is a reminder of the importance of transparent and well-documented workflows.

M3 - Report

BT - How Many Replicators Does It Take to Achieve Reliability

ER -
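
Worked example

The abstract's conclusion that a minimum of three replicators per study achieves replication reliability of at least 95% confidence can be sanity-checked with a simple binomial model. The Python sketch below is a minimal illustration, not the paper's own calculation: it assumes each replicator independently reaches the correct verification with probability equal to one of the rates reported in the abstract (0.894 to 0.961), and that a study counts as reliably replicated when a majority of replicators verify it.

from math import comb

def majority_reliability(p: float, k: int) -> float:
    """Probability that a majority of k independent replicators,
    each correct with probability p, reach the right verdict."""
    need = k // 2 + 1  # smallest strict majority
    return sum(comb(k, n) * p ** n * (1 - p) ** (k - n)
               for n in range(need, k + 1))

# Verification rates reported in the abstract.
rates = {
    "transparent": 0.955,
    "opaque": 0.894,
    "transparent, corrected": 0.961,
    "opaque, corrected": 0.924,
}

for label, p in rates.items():
    for k in (1, 3):
        print(f"{label}: k={k} -> {majority_reliability(p, k):.3f}")

Under these assumptions a single replicator falls short of 95% reliability at the lowest reported rate (0.894), while three replicators exceed 95% at every reported rate (about 0.969 even at p = 0.894), which is consistent with the abstract's conclusion.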
