This post originally appeared in the Guardian

Psychology is evolving faster than ever. For decades now, many areas of psychology have relied on what academics call “questionable research practices” – a comfortable euphemism for malpractice that distorts science but falls short of the blackest fraud of all: fabricating data.

But now a new generation of psychologists is fed up with this game. Questionable research practices aren’t just being seen as questionable – they are increasingly being recognised for what they are: soft fraud. In fact, “soft” may be an understatement. What would your neighbours say if you told them you got published in a prestigious academic journal because you cherry-picked your results to tell a neat story? How would they feel if you admitted that you refused to share your data with other researchers for fear they might use it to undermine your conclusions? Would your neighbours still see you as an honest scientist – a person whose research and salary deserve to be funded by their taxes?

For the first time in history, we are seeing a coordinated effort to make psychology more robust, repeatable, and transparent. Now, in 2014, these reforms aren’t so much in the wind as they are in the room. As Pete Etchells put it last week, we may well be in a crisis, but there is simply no better time to be a research psychologist than right now.

Replication heat

Reform is invigorating, but it can be painful for researchers who feel caught in the crossfire. Last month, the journal Social Psychology reported an ambitious initiative to reproduce a series of influential discoveries published since the 1950s. Many of the findings could not be replicated, and in most cases the non-replications were received cordially by the researchers involved. However, Dr. Simone Schnall from the University of Cambridge argued that her work on social priming was treated unfairly. In a remarkable exchange, now dubbed “repligate”, Schnall claimed that she was bullied by those who sought (unsuccessfully) to replicate her findings and that the journal editors who agreed to publish the failed replications of her work behaved unethically. She wrote, “I feel like a criminal suspect who has no right to a defense and there is no way to win: The accusations that come with a ‘failed’ replication can do great damage to my reputation, but if I challenge the findings I come across as a ‘sore loser.’”

For many psychologists, the reputational damage in such cases is grave – so grave that they believe we should limit the freedom of researchers to pursue replications. In a recent open letter, Nobel laureate Daniel Kahneman called for a new rule under which replication attempts would be “prohibited” unless the researchers conducting the replication consulted beforehand with the authors of the original work. Kahneman says, “Authors, whose work and reputation are at stake, should have the right to participate as advisers in the replication of their research.” Why? Because method sections published by psychology journals are generally too vague to provide a recipe that can be repeated by others. Kahneman argues that successfully reproducing original effects could depend on seemingly irrelevant factors – hidden secrets that only the original authors would know. “For example, experimental instructions are commonly paraphrased in the methods section, although their wording and even the font in which they are printed are known to be significant.”

If this doesn’t sound very scientific to you, you’re not alone. For many psychologists, Kahneman’s cure is worse than the disease. Dr. Andrew Wilson from Leeds Metropolitan University points out that if the problem with replication in psychology is vague method sections, then the logical solution – not surprisingly – is to publish detailed method sections. In a lively response to Kahneman, Wilson rejects the suggestion of new regulations: “If you can’t stand the replication heat, get out of the empirical kitchen because publishing your work means you think it’s ready for prime time, and if other people can’t make it work based on your published methods then that’s your problem and not theirs.”

How does psychology’s replication crisis appear to other sciences?

In one sense, it is difficult not to find this debate embarrassing. What other area of science indulges in such navel-gazing as to even question the importance of replication? Where else is there such a fear of replication that the most senior figures in the field would seek to limit the freedom of one scientist to repeat the work of another? The idea that psychology should even need a “replication movement” is like a car manufacturer calling for the invention of the wheel – as ridiculous as it is redundant.

On the other hand, perhaps psychology is just passing through a natural stage in its evolution as a science. This made me wonder: how do scientists in more mature fields view the current replication drive in psychology? Is the attempt to replicate in their fields ever regarded as an act of implied aggression?

Professor Jon Butterworth, head of the physics and astronomy department at University College London, finds this view of replication completely alien. “Thinking someone’s result is interesting and important enough to be checked is more like flattery.” For Butterworth, there is no question that the published methods of a scientific paper should be sufficient for trained specialists in the field to repeat experiments, without the need to learn unpublished secrets from the original authors. “Certainly no physicist I know would dare claim their result depended on hidden ‘craft’.”

Dr. Helen Czerski, broadcaster, physicist, and oceanographer at University College London, offers a similar perspective. In her field, contacting the authors of a paper to find out how to replicate their results would be seen as odd. “You might have a chat with them along the way about the difficulties encountered and the issues associated with that research, but you certainly wouldn’t ask them what secret method they used to come up with the results.” Czerski questions whether Kahneman’s proposed rules might even breach research ethics. “My gut response is that asking that question is close to scientific misconduct, if you were asking solely for the purpose of increasing your chances of replication, rather than to learn more about the experimental issues associated with that test.”

At the same time, Professor Stephen Curry, a structural biologist and crystallographer at Imperial College London, points out that vague methods are not unique to psychology. “I haven’t come across the view that it would be impossible to write a full description because of ‘trade-craft’. Methods sections are often not adequate but in my view that is more down to laziness on the part of authors and reviewers.”

One thing seems clear – the culture of replication in the physical sciences is a world apart from psychology, and many years ahead. Dr. Katie Mack, astrophysicist at the University of Melbourne, says that in her field there are many situations where reproducing a result is considered essential for moving the area forward. “A major result produced by only one group, or with only one instrument, is rarely taken as definitive.” Mack points out that fresh replications are valued even for findings that have already been confirmed many times over, such as the Hubble constant, which describes the rate of expansion of the Universe. “Many groups have measured it with many different methods (or in some cases the same method), and each new result is considered noteworthy and definitely publishable.” Like Czerski and Butterworth, Mack is adamant that a published method section should contain enough detail to repeat an experiment, without needing to consult with anyone. “If it doesn’t, the paper will not be considered as good.”

Where next for psychology?

It seems clear that for psychology to advance to the next level, the act of replication needs to be regarded as a vital link in the scientific chain – not as a proof of truth but as a marker of credibility, a sign that the results are worth considering further in generating theory. Butterworth argues that psychologists must come to terms with the fact that “unreproduced and unreproducible results are basically worthless”, while Mack calls for greater skepticism around results that haven’t been reproduced. “I think this is the main driver in astrophysics – we know that one group can screw up, or a fluke result can happen, and we take independent verification to be much more important than just a solid statistical analysis.”
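A quick aside on the arithmetic behind Mack’s point (my illustration, not from the original piece, with all numbers assumed for the sake of the example): under the conventional p < 0.05 threshold, a single spurious “significant” result from a true null effect turns up about one time in twenty, but an independent replication producing the same fluke happens roughly one time in twenty squared – one in four hundred. A minimal Python sketch of that calculation:

    import random

    TRIALS = 100_000   # simulated studies of a true null effect
    ALPHA = 0.05       # conventional per-study false-positive rate

    # Each study is a weighted coin flip: a 5% chance of a
    # spurious "significant" result when no real effect exists.
    single = sum(random.random() < ALPHA for _ in range(TRIALS)) / TRIALS

    # For a finding to survive, an independent team must fluke too.
    replicated = sum(
        random.random() < ALPHA and random.random() < ALPHA
        for _ in range(TRIALS)
    ) / TRIALS

    print(f"spurious single result:     ~{single:.3f}  (expected 0.050)")
    print(f"spurious replicated result: ~{replicated:.4f} (expected 0.0025)")

On those assumptions, independent verification shrinks the risk of a fluke surviving from 5% to 0.25%, which is why one group’s solid statistics are no substitute for another group’s confirmation.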

It is one thing to say that psychologists need to care more about reproducibility, but how do we make that happen within a juvenile academic culture that, above all, rewards novelty and creativity? Curry suggests that the answer may be to build independent collaboration into the process of discovery. As in the hunt for the Higgs boson, psychologists could form independent teams to tackle important novel questions, agreeing on exact methods beforehand. They would then conduct the studies independently, avoiding direct communication until the final outcomes are in. “That way, both [groups] get the novelty bonus but you also get replication.” What if the groups get different results? This is not a problem, says Curry, because it “will attenuate the hype around single positive results and the competitive pressure might rein in the wilder flights of speculation.”

For Czerski, the problem in psychology may also be one of pride and the inability to separate the discovery from the discoverer. She notes how this has changed in physics. “We have moved on from the years when a lone scientist-hero makes significant individual contributions to science, and everyone (and their ego) needs to accept that. The prize is won when the whole field moves forwards.”

Psychology clearly has some growing up to do. Critics may argue that it isn’t fair to judge psychology by the standards of physics, arguably the Olympic athlete of the sciences. On the other hand, perhaps this is precisely the goal we should set for ourselves. In searching for solutions, psychology cannot afford to be too inward-looking, imagining itself as a unique and beautiful snowflake tackling concerns about reproducibility for the first time.

Above all, the way psychology responds to the replication crisis is paramount. Other sciences are watching us, as are the public. The last month has seen those who sought to replicate prior work – or to bring in transparency reforms – subjected to a barrage of attacks from senior psychologists. They have been called “replication Nazis”, “second stringers”, “mafia”, and “fascists”, to name but a few. The fact that those at the top of our field feel comfortable launching such attacks highlights a pertinent irony. For all our claims to understand human behaviour, psychologists stand to learn the psychology of actually doing science from our older cousins – physical sciences that haven’t studied psychology for a day. We would do well to listen.