SPSP TOP II Task Force Provides Resources and Recommendations for Authors

New resources are available to support SPSP researchers looking to incorporate open science practices into their work. The new website is designed to be a one-stop shop for researchers trying to figure out:

  • How to navigate common hurdles to sharing data, materials, and code.
  • How to select an informative power analysis for a given study and how to write it up in a manuscript.
  • How to describe preregistered elements of a study when writing up a manuscript (with examples).
  • How to choose a trusted data repository that works for your needs.

If you’re submitting to PSPB, you’ll notice that the submission guidelines have been updated to encourage transparency and make manuscripts more informative. Some of the major changes include:

  • Consistent with SPSP’s updated data sharing policy, authors must post the following in a trusted repository (see list here) and link to them in the manuscript: materials, analysis code, any data not already publicly accessible, and a codebook describing all variables in the data file and how they are coded.
  • Authors must report at least one power analysis for each study that uses quantitative methods (examples here; see also the sketch after this list).
  • Authors must transparently report key methodological details for each study, including how sample size was determined; all manipulations, measures, and exclusions; and whether a preregistration of the study and analysis plan exists (examples here).
  • PSPB will now consider manuscripts that directly (or closely) replicate the procedures of studies previously published at PSPB.
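
To illustrate, here is a minimal sketch of what an a priori power analysis might look like in R using the pwr package. The effect size, alpha level, and design below are hypothetical illustrations, not PSPB requirements:

    # A priori power analysis for a two-condition between-subjects design
    # (all numbers are hypothetical illustrations).
    # install.packages("pwr")   # one-time install
    library(pwr)

    # Smallest effect size of interest d = 0.40, alpha = .05, power = .80
    pwr.t.test(d = 0.40, sig.level = 0.05, power = 0.80,
               type = "two.sample", alternative = "two.sided")
    # Output: n is roughly 99 per group, so one would plan for at least
    # 100 participants per condition.

Writing the power analysis as code makes its assumptions (the test, effect size, alpha, and power) easy to report alongside the sample size justification in the manuscript.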

The new resources were developed by the SPSP TOP Level II Task Force, which was created in March 2020 and charged with building on work by the SPSP Publications Committee to develop (a) a set of specific recommendations for how to effectively and efficiently implement TOP Level II guidelines at PSPB and (b) concrete resources for authors and editors to do so.

The task force team included Jin Xun Goh (Colby College), Rick Klein (Tilburg University), Courtney Soderberg (Center for Open Science), and Gregory Webster (University of Florida), who all deserve a great deal of thanks for their hard work throughout 2020. The team was chaired by SPSP Board Member Alison Ledgerwood (UC Davis).

The task force reached out to journal editors who had already adopted TOP Level II guidelines in whole or in part to learn about possible strategies for effective implementation, and we also corresponded with TOP guideline experts to understand the guiding principle behind each guideline. We then combined these insights with the Publications Committee’s prior research to develop specific recommendations for how SPSP journals could most effectively, efficiently, and inclusively implement TOP Level II guidelines, and we created tailored resources to support authors and editors in this process. We incorporated feedback from the Board and the new PSPB editor team, and we invite feedback anytime from the SPSP community on how to better support you in navigating open science practices.


Onward and Upward with Psychology

The first-ever meeting of the Society for Improving Psychological Science (SIPS), a name that was itself still up for debate, was radically different from a typical psychology conference. Attendees didn’t just learn about new research on how the scientific process can be improved; we worked for three days to try to immediately and tangibly improve psychological science.

The feel of the meeting, like the feel of the Center for Open Science (COS), which hosted it, was that of a tech start-up. Researchers gave brief talks on improving science, but the bulk of the agenda was devoted to completing group projects intended to improve scientific practice and norms.

How can we improve teaching and training of psychology? Given how quickly the field is moving with regard to methods and proper interpretation of statistics, what can instructors, graduate students, and active researchers do to keep up?

How can journals and societies improve their practices to encourage open science?

How can we improve hiring and promotion—and better acknowledge contributions to science (like providing materials, code, or expertise) that don’t always show up in publication counts?

How can we make replication and data sharing normal parts of psychology?

Proposals were generated, explored, and either pursued or discarded in favor of more fruitful-seeming ideas. Brian Nosek, describing the approach used at the COS, encouraged participants to consider the “80/20 Rule”: oftentimes, 20% of the work on an aspirational goal will yield 80% of the benefits. We focused our efforts on small changes that could have big impacts.

The group was large, international, and inclusive—anyone who expressed interest in attending was invited, and everyone who requested assistance for housing or travel was given it. Given that founder Simine Vazire and the organizing committee planned the meeting in ~6 months, the turnout was impressive.

The core of the membership was attracted by the idea of helping to improve psychological science, although even the name Society for Improving Psychological Science was debated: some members worried that it implied psychology is deficient.

Of course, the idea that psychology needs improvement is something many in social and personality psychology have encountered, after several high-profile failures to replicate studies (RPP[1]; ego depletion[2]; power poses[3]; etc.). There is a growing consensus that psychology, and science in general, often reports unreliable results, because the role of publications in hiring and promotion creates incentives to manipulate data and statistics inappropriately.

Yet what inspired me most about this meeting was seeing concrete action being taken. Articles from the ’60s[4], ’70s[5], ’80s, ’90s[6], and 2000s[7] by prominent researchers all describe how a series of common methodological problems prevents psychology as a field from accumulating an accurate and reliable body of knowledge. (For a disheartening but well-argued description of why, see the new paper “The Natural Selection of Bad Science.”[8]) Every decade, however, researchers seem to shake off these criticisms not by addressing them but by reinforcing the status quo. Until now.

A critical mass of researchers has finally coalesced to address these issues and consider how we can do better psychology research. There isn’t just one solution, but many. And the solution can’t come from just one research group, but needs to be part of a larger discussion in the scientific community—a community of researchers that wants to proactively tackle problems with the way psychology is done.

To that end, I point readers to the SIPS page on the Open Science Framework (OSF), which contains open materials for all proposed changes. From collecting centralized repositories of materials, to encouraging journals to adopt open science badges, to creating a “Study Swap” where researchers can agree to replicate each other’s studies before publication, there are many opportunities for interested people to get involved. Open Science refers not just to the methods, but to the inclusion of the entire scientific community. Psychologists, let’s all help each other improve.

Visit the SIPS page!

https://osf.io/jtcu9/


References:

[1] http://science.sciencemag.org/content/349/6251/aac4716

[2] http://www.psychologicalscience.org/redesign/wp-content/uploads/2016/03/Sripada_Ego_RRR_Hagger_FINAL_MANUSCRIPT_Mar19_2016-002.pdf

[3] http://www.ncbi.nlm.nih.gov/pubmed/25810452

[4] http://meehl.umn.edu/sites/g/files/pua1696/f/074theorytestingparadox.pdf


[5] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.693.8918&rep=rep1&type=pdf

https://www.researchgate.net/profile/Gerd_Gigerenzer/publication/232481541_Do_Studies_of_Statistical_Power_Have_an_Effect_on_the_Power_of_Studies/links/55c3598c08aeb975673ea348.pdf

[6] http://meehl.umn.edu/sites/g/files/pua1696/f/144whysummaries.pdf

http://psych.colorado.edu/~willcutt/pdfs/Cohen_1990.pdf

https://www.mpib-berlin.mpg.de/volltexte/institut/dok/full/gg/ggstehfda/ggstehfda.html

http://meehl.umn.edu/sites/g/files/pua1696/f/169problemisepistemology.pdf


[7] http://pubman.mpdl.mpg.de/pubman/item/escidoc:2101336/component/escidoc:2101335/GG_Mindless_2004.pdf

http://www.ejwagenmakers.com/2007/pValueProblems.pdf
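
[8] https://doi.org/10.1098/rsos.160384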


Adeyemi Adetula

Adeyemi Adetula is a doctoral student at the Université Grenoble Alpes, France, and a member of the CORE Lab. He is working on increasing the generalizability of psychology, with a focus on Africa. He completed his earlier education in Nigeria, where he earned bachelor’s and master’s degrees in psychology and taught psychology for six years. He is currently leading a series of multisite studies to replicate African effects among African, European, and North American populations. He is an advocate of adopting open science in Africa.

What led you to choose a career in personality and social psychology?

I'm passionate about psychology broadly. Even though I had a master's in forensic psychology, my then master's advisor, Dr. Ojiji, a social psychologist, spurred my interest in working on social values and personality traits. Fortunately, my PhD co-advisors, Hans IJzerman and Patrick Forscher, both social psychologists, agreed to supervise my PhD in social and experimental psychology. Hopefully, I will have a great career in social psychology if my co-advisor, Dana Basnight-Brown, a cognitive psychologist, motivates me enough not to become a cognitive psychologist.

Briefly summarize your current research, and any future research interests you plan to pursue.

My current research focuses on the generalizability of psychology and on improving Africans' participation in global research. I'm a doctoral student at the Université Grenoble Alpes, France, working on replicating African effects in WEIRD populations to understand the extent to which these findings generalize. Aside from testing the generalizability of African claims via cross-cultural replications, working with Africans also allows us to understand the barriers to African science and how to promote best practices in African research. We share our underlying philosophy in a recent commentary in Nature Reviews Psychology (Adetula et al., 2022; preprint available here).

Our approach is to conduct two multisite Registered Replication Report studies. First, the CREP (Collaborative Replication and Education Project) Africa study is a training project that prepares African collaborators in open science practices via 1) tutorial articles and CREP training videos and 2) an Africa-wide CREP replication study. Second, the CPA project prepares African collaborators for the ManyLabs Africa study, a replication study of three effects originally discovered in African countries, which will be tested among European, North American, and African populations. By doing so, we can determine whether the effects that were discovered in Africa replicate in Africa and, further, whether they generalize beyond the continent. In addition, we try to identify barriers to doing rigorous research with our African colleagues.

I have faced one considerable barrier to doing rigorous research myself during my PhD. I was promised a grant from a Nigerian institute called TETFund, which was vetted by a French organization called Campus France. For more than a year, I did not receive any funding, and neither TETFund nor Campus France followed up on their promises. I feel intense gratitude, however, for the kind support of social and personality psychologists, without whose donations I would not be able to finish my PhD (you can read the full story on my GoFundMe or Patreon page).

Why did you join SPSP?

I joined SPSP 1) to associate and network with colleagues from other world regions and 2) to keep up with discoveries, discussions, and happenings in personality and social psychology.

What is your most memorable SPSP Annual Convention experience?

My first and only participation was in the 2021 remote conference. I successfully organized a hackathon and workshop to train 15 African researchers in open science initiatives, tools, and practices. Though it was logistically challenging to get African researchers to participate in this conference, it was fulfilling to contribute to improving their capacity. You can read more about the event here.

How has being a member of SPSP helped to advance your career?

First, as an early-career African researcher, an affiliation with an international association provides a platform to network with colleagues from other regions of the world. Second, as an advocate for better African participation in global research, I was awarded the International Bridge-Building Award, an SPSP funding program to promote diversity and the participation of scholars from underrepresented populations. With this funding, we provided a one-year internet subscription to 15 African researchers so they could participate in a hackathon and workshop on creating open science syllabi for African research methods courses. The beneficiaries also received conference fee waivers and a one-year membership subscription. However, there is still more SPSP can do, such as moving the conference out of North America and diversifying to whom it gives awards. I hope to help achieve this through my role as an advisory board member of the SPSP Anti-Colorism/Eurocentrism in Methods and Practices (ACEMAP) Task Force, a platform to promote inclusion and to gain valuable experience for my career development.

Do you have any advice for individuals who wish to pursue a career in personality and social psychology?

With more devotion to big-team science and inclusivity, personality and social psychology can make more profound impacts globally. Our research should truly answer questions about what motivates humans across the world, not just undergraduate students at Stanford. We should not only diversify populations for more generalizable findings and theories, but we should also involve researchers from across the globe—including Africa—to ask questions that researchers from the U.S. and Europe may not think of or which may not support their communities. Supporting African researchers is not only the moral thing to do, it is also worthwhile from an intellectual perspective.

Outside of psychology, how do you spend your free time?

I love listening to Yoruba songs, rhythms, and beats. Curious enough?! Check out this documentary on a Yoruba drum. Also, I like to play sports and spend time with friends and family.


RMarkdown: What Are You Waiting For?

In a test of the open science movement in psychology, Tom Hardwicke, who has been a postdoctoral scholar at the Meta-Research Innovation Centers at Stanford University and at Charité – Universitätsmedizin Berlin in Germany, embarked on a project that should have been easy. Working with Mike Frank, David and Lucile Packard Professor of Human Biology in the Psychology Department at Stanford University and Director of the Symbolic Systems Program, he set out to download the data and reproduce the analyses from a number of published papers. It should have been easy because these papers upheld the principles of open science by making their data freely available online, and all Hardwicke and Frank were trying to do was re-create the numbers in those papers: step 1 of reproducibility.

What they found surprised them. Even for simple statistics, they were able to successfully reproduce the numbers only about 1/3 of the time; 2/3 of the time, they couldn't get the numbers to match. What's more, they found substantial discrepancies between their results and the published results for about 1/3 of the papers, and in many cases, not even the original authors themselves could figure out where the numbers in the papers came from.

While none of these errors affected the papers' major inferential conclusions, this is a telling, and troubling, example. The process of running statistics and then inserting those statistical findings into a research paper offers many opportunities for error: typos, pasting the right number into the wrong place, or rounding incorrectly. Surely there must be a way to cut down on the kinds of sloppy errors that are mucking up our science?

Enter RMarkdown. RMarkdown is a type of document created in RStudio that integrates written text with chunks of code, allowing researchers to compile written parts of the manuscript—such as explanations of results—with statistical outputs, tables, and figures. The resulting integration can then be rendered into Word, PDF, or HTML formats.

When used effectively, RMarkdown eliminates the need to copy and paste statistics into results sections or tables. Not only can this dramatically reduce the number of sloppy errors that occur when transferring dozens of numbers from statistical software to a manuscript in a Word document, but it can also save researchers time on this tedious task. For example, if you suddenly realize that one of your participants failed an attention check and needs to be excluded, you can use RMarkdown to exclude this participant and re-run the code so that the numbers and tables in the manuscript all update accordingly, rather than re-running the code and then editing the manuscript by hand.
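
To make this concrete, here is a minimal sketch of an RMarkdown results file; the data file, variable names, and test are hypothetical. Statistics are computed once in a code chunk and then referenced inline:

    ---
    title: "Study 1 Results"
    output: word_document
    ---

    ```{r analysis, include=FALSE}
    # Load the data and apply exclusions in one place
    # (file and variable names are hypothetical)
    d <- read.csv("study1_data.csv")
    d <- subset(d, passed_attention_check == 1)
    tt <- t.test(dv ~ condition, data = d)
    ```

    After exclusions, `r nrow(d)` participants remained. The two conditions
    differed on the dependent variable, t(`r round(tt$parameter, 1)`) =
    `r round(tt$statistic, 2)`, p = `r round(tt$p.value, 3)`.

If a participant later needs to be excluded, you change the subset line and re-knit; the sample size and test statistics in the rendered Word, PDF, or HTML document update automatically.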

In some of his writing on this topic, Frank sums up the benefits of this approach nicely: "Often we tend to think of there being a tension between the principles of open science and the researcher's own incentive to work quickly. In contrast, this is a case where I think that there is no tension at all: a better, easier, and faster workflow leads to both a lower risk of errors and more transparency."

Of course, if you're not familiar with using R to conduct statistical analyses or have never written a manuscript or results section in RMarkdown before, there are start-up costs to learning a new system. But one benefit of R in general is that not only is the software itself free and open-source, but there are a ton of online resources for using R, including the holy grail manual R for Data Science (free to use through this link under a Creative Commons license) and an upcoming SPSP-sponsored video series on R data analysis for social and personality psychologists. And time spent switching workflows to use RMarkdown now is time saved later from copying and pasting statistical results into manuscripts.

RMarkdown won't solve all our reproducibility problems: there's no protection against coding errors, questionable research practices, or HARKing (hypothesizing after the results are known). But it is a solid first step: ensuring that our research papers are free from sloppy copy-and-paste errors is the least we can do. And for social and personality psychologists looking to improve or even overhaul their research practices to align with the open science movement, sometimes even a small step can be a big win.

Being Open-minded About Open Science

Over the past decade, psychology has endured a reproducibility crisis. That crisis called the validity of many research findings into question and shone a light on poor research practices in the field. As a result, open science practices and techniques that promote transparency in research from start to finish have been on the rise. This article outlines three of the main techniques used to produce transparent research, as well as how to get started with them.

Preregistration

Preregistration means taking a moment before you begin your research to describe your hypotheses, methods and research design, and analytic strategy. The goal of preregistration is to ground your research in past theory rather than molding a research article around the results you find. Preregistration can be flexible as long as you are honest. If you have already collected data, you could preregister your hypotheses and analytic strategy and upload them to an open repository like the Open Science Framework (OSF) before completing your analyses. Additionally, issues do arise during research, and amending a preregistration is okay, as long as you are transparent about what has changed and why. If you have not collected data, consider submitting a registered report, which allows your research to be accepted for publication in a peer-reviewed journal based on the merit of the theory and research design. If you want to get started with preregistration, check out the AsPredicted template on OSF, which contains 8 essential preregistration questions. A bonus of preregistration is that it can remind you of your hypotheses and intentions if you take a pause from a research project.

Open Materials

Being open with your research materials simply means sharing them in an open repository, like OSF, GitHub, or Dataverse, or within the supplemental materials of a manuscript. Placing your research materials in an open repository allows others to replicate your work easily and directly. This is an excellent and easy way to get started with open science research practices.

Open Data

The final open science practice is open data: sharing your data set in an open repository. Open data helps reviewers and other researchers check your work. Additionally, a publicly shared data set can be used for other publications and can provide resources to those who do not have the means to collect their own data. Alternatively, if your research is part of a larger data set that you would not like to share, you can make a smaller data file with only the variables of interest (see the sketch below). Again, open science practices can be flexible and are meant to help researchers.
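
As a sketch of that last point, creating a shareable file with only the variables of interest takes a few lines of R (the file and variable names here are hypothetical):

    # Read the full (private) data set, keep only the variables of
    # interest, and write a shareable file
    full <- read.csv("full_dataset.csv")
    shared <- full[, c("participant_id", "condition", "dv", "age")]
    write.csv(shared, "shared_data.csv", row.names = FALSE)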

I hope this article shows how easy and flexible open science practices can be. If you want to get started, you can create a free Open Science Framework account at osf.io and use it to store preregistrations, open materials, or open data for your next research article.

If you have any questions about open science, please feel free to email me at [email protected].

Forging Meaningful Partnerships for Conducting Field Experiments

For personality and social psychologists, the entire world can be a laboratory. Customer-facing businesses represent near-unlimited possibilities to apply psychological insights and experimentally test interventions. But how can social and personality psychology researchers form the critical partnerships that allow for such mutually beneficial collaborations?

Andrea Dinneen, Senior Behavioral Researcher at the Common Cents Lab in the Center for Advanced Hindsight, shared strategies for finding field experiment partners as part of the Intervention Science preconference. Using these strategies, Dinneen and her colleagues have recruited and partnered with companies to conduct field experiments aimed at improving financial well-being.

To efficiently recruit and vet potential field partners, Dinneen recommends organizing workshops. These trainings share psychological insights on a particular topic; for example, one such workshop might present research on the psychology of budgeting, which financial organizations can then leverage to improve their products for their customers. Hand-selecting and inviting businesses to participate in these workshops not only provides value to the businesses but also allows researchers to vet many potential partners in the same room at once. Together, businesses and researchers can explore whether a collaboration would be mutually beneficial. Further, these workshops help establish reciprocity between researchers and potential field partners and showcase researchers' credibility and expertise.

Once potential field experiment partners have been identified, how can researchers get business-minded collaborators to think a little like experimental psychologists? For this, Dinneen and colleagues turn to the Open Science Framework (OSF). Modeled on the OSF's preregistration documents, an exercise developed by the Center for Advanced Hindsight team invites collaborators to reflect on what the effects of a specific intervention might look like for their customers. These collaborators then think through how they would change their products, or features of their products, if they found the expected effects. The exercise not only helps partners outside academia think realistically and concretely about what effect sizes might look like, it also helps them pre-commit to applying insights from the study in actionable ways.

Many personality and social psychology researchers want to utilize organizations and infrastructure already present in the community to test psychological theory and make real-world changes. But forging and maintaining field experiment collaborations can be challenging for researchers used to working only with colleagues in academia. Thinking carefully and innovatively about how to find and maintain these mutually beneficial partnerships is one way personality and social psychology researchers can help their work reach a larger, more diverse audience.


By: Kari Leibowitz. Kari Leibowitz is a 4th-year PhD student in social psychology and a Stanford Interdisciplinary Graduate Fellow. Kari works in the Stanford Mind & Body Lab, and her research involves leveraging psychosocial forces to improve healthcare experiences and outcomes.

Talk: “A Practical Guide to Finding Good Field Experiment Partners: Insights from Interventions to Improve Financial Wellbeing,” part of the Intervention Science preconference held Thursday, February 7, 2019.

Speaker: Andrea Dinneen, Senior Behavioral Researcher, Common Cents Lab, Center for Advanced Hindsight

The Center for Open Science at SPSP

By Dave Nussbaum

The Center for Open Science came to Austin to talk about open science and how it can simplify researchers’ lives. COS Project Coordinator Johanna Cohoon reports…

The Center for Open Science (COS) returned home after SPSP with considerably lighter luggage. We spent our time in Austin spreading the message of open science, both vocally and sartorially, diminishing our t-shirt stock and building enthusiasm for practical ways to adopt transparent practices.

The Center is a non-profit technology startup in Charlottesville, Virginia. We seek to improve the openness, integrity, and reproducibility of science by creating tools for scientists so they can improve the quality and efficiency of their research. Our SPSP preconference and booth were designed to orient the SPSP membership regarding the COS (Get it? “Cause?”) and open science.

Like the rainbow array of our t-shirt selection, scientific workflows don’t all look alike. Not every research project follows the same format, but there are some common goals: to learn something and to share what was learned with others. Open practices can accelerate progress toward those goals. That is what we showed visitors to our booth at SPSP.

Recognizing that not all workflows look alike, the Center makes the tools it supports customizable. Our flagship project, the Open Science Framework (OSF), enables researchers to organize projects and collaborate with others, but it doesn’t require any specific format and doesn’t make anything publicly viewable unless you want it to. The OSF supports private workflows by helping researchers share with collaborators, archive their work, and manage their research materials and project history. If you make your research available to the public, that’s great! Now other scientists can find and cite your datasets, materials, or whatever you choose to make available.

What we have found is that when discussion is fostered and resources are shared, amazing things can happen. Sharing ideas, materials, and data openly is a great way to draw on the expertise of others, form unexpected collaborations, and move research in directions that you hadn’t anticipated yourself.

For instance, with some feedback and discussion over dinner Thursday evening at SPSP, our team came up with the idea of letting posters and talks from the conference be uploaded to the OSF with a simple email and shared with the whole community. We went from a question (“What can we do better?”) to an idea (“What if everyone could just email us their files?”) to prototyping and testing (“Whoops! Gotta fix that!”) to a product (“Ta-da!”) before SPSP was over. So far, more than 100 posters and talks from SPSP have been uploaded, and collectively they have been downloaded nearly 800 times, preserving and disseminating the research presented at SPSP. That’s open science!

Besides building software to support scientific research, COS supports metascience projects: research about research practices. At SPSP, we shared a number of our crowdsourced science projects that are actively recruiting collaborators. For example, the Reproducibility Project: Psychology is recruiting volunteers to join our large-scale replication effort. So far, it involves more than 170 researchers, and grants are available for new teams! Other actively recruiting projects include Many Labs 2 and Many Labs 3: Participant Pool Edition, which follow up on the initial, successful Many Labs project. Finally, the Archival Project and CREP can involve undergraduate contributors and can be easily integrated into course instruction. Like the OSF, the metascience projects demonstrate methodologies and investigate research questions that we hope will ultimately improve the openness, integrity, and reproducibility of scientific research.

For more information, to sign up for a metascience project, to get answers to questions, or to see if you can get your hands on one of our famous t-shirts, drop us a line at [email protected]. Also, follow us on Twitter at @OSFramework or on Facebook.


Johanna Cohoon (@jlcohoon) recently graduated from the University of Virginia. After receiving her bachelor’s degree in Cognitive Science, she joined COS, where she coordinates the Reproducibility Project: Psychology and the Archival Project.