The first-ever meeting of the Society for Improving Psychological Science (SIPS)—even that name is uncertain—was radically different from a typical psychology conference. Attendees didn't just learn about new research on how the scientific process can be improved; we worked for three days to improve psychological science immediately and tangibly.

The feel of the meeting—like the feel of the Center for Open Science (COS), which hosted it—was that of a tech start-up. Researchers gave brief talks on improving science, but the bulk of the agenda was devoted to group projects intended to improve scientific practice and norms. Those projects tackled questions like these:

How can we improve the teaching and training of psychology? Given how quickly the field is moving with regard to methods and the proper interpretation of statistics, what can instructors, graduate students, and active researchers do to keep up?

How can journals and societies improve their practices to encourage open science?

How can we improve hiring and promotion—and better acknowledge contributions to science (like providing materials, code, or expertise) that don’t always show up in publication counts?

How can we make replication and data sharing normal parts of psychology?

Proposals were generated, explored, and either pursued or discarded in favor of more fruitful-seeming ideas. Brian Nosek, describing the approach used at COS, encouraged participants to consider the "80/20 rule": often, 20% of the work toward an aspirational goal yields 80% of the benefits. We focused our efforts on small changes that could have big impacts.

The group was large, international, and inclusive—anyone who expressed interest in attending was invited, and everyone who requested assistance with housing or travel received it. Given that founder Simine Vazire and the organizing committee planned the meeting in ~6 months, the turnout was impressive.

The core of the membership was attracted by the idea of helping to improve psychological science—although even the name Society for Improving Psychological Science was debated, because some members worried it implied that psychology is deficient and therefore needs fixing.

Of course, the idea that psychology needs improvement is familiar to many in social and personality psychology after several high-profile failures to replicate studies (the Reproducibility Project: Psychology[1], ego depletion[2], power poses[3], etc.). There is a growing consensus that psychology—and science in general—often reports unreliable results, because publication pressures tied to hiring and promotion create incentives to manipulate data and statistics inappropriately.

Yet what inspired me most about this meeting was seeing concrete action being taken. Articles from the '60s[4], '70s[5], '80s, '90s[6], and 2000s[7] by prominent researchers all describe how a common set of methodological problems prevents psychology as a field from accumulating an accurate and reliable body of knowledge. (For a disheartening but well-argued account of why, see the new paper "The Natural Selection of Bad Science."[8]) Every decade, however, researchers seem to shake off these criticisms not by addressing them but by reinforcing the status quo—until now.

A critical mass of researchers has finally coalesced to address these issues and consider how we can do better psychology research. There isn't just one solution, but many. And solutions can't come from just one research group; they need to emerge from a larger discussion in the scientific community—a community of researchers that wants to proactively tackle problems with the way psychology is done.

To that end, I point readers to the SIPS page on the Open Science Framework (OSF), which contains open materials for all of the proposed changes. From collecting centralized repositories of materials, to encouraging journals to adopt open science badges, to creating a "Study Swap" where researchers can agree to replicate each other's studies before publication, there are many opportunities for interested people to get involved. Open science refers not just to methods, but to the inclusion of the entire scientific community. Psychologists, let's all help each other improve.

Visit the SIPS page!

https://osf.io/jtcu9/


References:

[1] http://science.sciencemag.org/content/349/6251/aac4716

[2] http://www.psychologicalscience.org/redesign/wp-content/uploads/2016/03/Sripada_Ego_RRR_Hagger_FINAL_MANUSCRIPT_Mar19_2016-002.pdf

[3] http://www.ncbi.nlm.nih.gov/pubmed/25810452

[4] http://meehl.umn.edu/sites/g/files/pua1696/f/074theorytestingparadox.pdf

[5] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.693.8918&rep=rep1&type=pdf

https://www.researchgate.net/profile/Gerd_Gigerenzer/publication/232481541_Do_Studies_of_Statistical_Power_Have_an_Effect_on_the_Power_of_Studies/links/55c3598c08aeb975673ea348.pdf

[6] http://meehl.umn.edu/sites/g/files/pua1696/f/144whysummaries.pdf

http://psych.colorado.edu/~willcutt/pdfs/Cohen_1990.pdf

https://www.mpib-berlin.mpg.de/volltexte/institut/dok/full/gg/ggstehfda/ggstehfda.html

http://meehl.umn.edu/sites/g/files/pua1696/f/169problemisepistemology.pdf

[7] http://pubman.mpdl.mpg.de/pubman/item/escidoc:2101336/component/escidoc:2101335/GG_Mindless_2004.pdf

http://www.ejwagenmakers.com/2007/pValueProblems.pdf
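
[8] https://doi.org/10.1098/rsos.160384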