MetaScience

Is the research that needs to be done always the research that can be published? What are the structural incentives for us to perform different kinds of research? What happens in fields like AI, with competing incentives from academic and industry-sponsored research? How interdisciplinary are interdisciplinary fields, and how do citation networks work?

SODAS researchers work on improving peer review policies and contribute to organizing conferences and workshops that help advance publication norms.


Peer review is a ubiquitous mechanism for publication quality control in academia, but it is increasingly clear that it cannot fully perform that function. SODAS researchers led a widely read study of reviewer biases in NLP venues, contributed to improved review policies at EMNLP 2020 and NAACL 2021, and co-organized a tutorial on peer review at EACL 2021.

- Cohen, K., Fort, K., Mieskes, M., Névéol, A., & Rogers, A. (2021). Reviewing Natural Language Processing Research. Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Tutorial Abstracts, 14–16. https://www.aclweb.org/anthology/2021.eacl-tutorials.4

- Rogers, A., & Augenstein, I. (2020). What Can We Do to Improve Peer Review in NLP? Findings of EMNLP, 1256–1262. https://www.aclweb.org/anthology/2020.findings-emnlp.112/ Featured in The Gradient and the Science Report (in Danish).


Publication of negative results is difficult in all fields, but publishing only positive results presents a misleading picture of the state of knowledge, contributes to the reproducibility crisis, and wastes researchers' time repeating experiments already known not to be promising. SODAS researchers co-organized the First and Second Workshops on Insights from Negative Results in NLP (co-located with EMNLP 2020 and 2021); the third iteration of the workshop has been accepted for ACL 2022.

SODAS also participated in organizing the ICLR 2021 workshop "Beyond Static Papers: Rethinking How We Share Scientific Understanding in ML", dedicated to innovative proposals for interactive, multimedia-rich, dynamic ways of presenting and sharing research.

Panel at the "I Can't Believe It's Not Better" Workshop (co-located with NeurIPS 2021), with Emtiyaz Khan (RIKEN), Atoosa Kasirzadeh (DeepMind), Suresh Venkatasubramanian (University of Utah), and Anna Rogers (SODAS). https://i-cant-believe-its-not-better.github.io/neurips2021/

- Sedoc, J., Rogers, A., Rumshisky, A., & Tafreshi, S. (Eds.). (2021). Proceedings of the Second Workshop on Insights from Negative Results in NLP. Association for Computational Linguistics. https://aclanthology.org/2021.insights-1.0

- Murthy, K., Samiei, M., Kusupati, A., Considine, B., Tabb, A., Rogers, A., Mehta, B., Khetarpal, K., Hooker, S., Maharaj, T., Parikh, D., Nowrouzezahrai, D., & Bengio, Y. (Eds.). (2021). Proceedings of the ICLR 2021 Workshop "Beyond Static Papers: Rethinking How We Share Scientific Understanding in ML". https://rethinkingmlpapers.github.io/

- Rogers, A., Sedoc, J., & Rumshisky, A. (Eds.). (2020). Proceedings of the First Workshop on Insights from Negative Results in NLP. Association for Computational Linguistics. https://www.aclweb.org/anthology/2020.insights-1.0/


Information will follow.


Funded by:

Copenhagen Centre for Social Data Science (SODAS)

Full project name: MetaScience - the science of science

Project start: 2020
Project end: 2022

Contact

Anna Rogers
Postdoc
SODAS

External researchers:

Isabelle Augenstein
Associate Professor, Department of Computer Science, UCPH
Phone: +45 93 56 59 19