What would it take for researchers to change their minds in light of new findings? Do they demand more evidence from studies that contradict their beliefs than from those that confirm them? And how does the scientific community react when someone publishes results that contradict their own previous publications? A new theoretical discussion article addresses these questions.
Whether in physics, in medicine, or in reconstructing the history of the Earth and the origin of life: in many fields of science there are competing theories and explanations, and it is not yet clear which of them is correct. This also applies to psychological research. For example, there are different models of how people make decisions. Some researchers have spent most of their careers defending one of these models: they have conducted countless experiments on the question and have repeatedly argued for their preferred model in their publications.
Against this background of long commitment to a single perspective or model, can researchers still take an unbiased view of new findings? What if, in a new experiment, they suddenly find that the competing model fits the data better? And how would the resulting publication be received by the scientific community? Psychology professor Ami Eidels of the University of Newcastle in Australia raises these questions in a discussion article. “We should care about how our prior beliefs and expectations affect our interpretation of scientific findings, and perhaps also how much evidence we need to change our beliefs,” he writes.
To illustrate this, Eidels devises a scenario in which the fictional researchers Alex and Bea have been studying the same phenomenon for years, with Alex championing Theory A and Bea championing Theory B. Each now independently obtains the same experimental results, and those results best fit Theory A. From Eidels’ point of view, it would be reasonable for Bea to demand stronger evidence before conceding that the new results support Theory A, while Alex might feel his view confirmed even by weaker evidence.
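The asymmetry in this scenario can be illustrated with a simple Bayesian sketch (my own toy example, not taken from Eidels’ article): Alex and Bea hold different prior beliefs about Theory A, see the same evidence, and end up with very different posteriors. The prior values and the likelihood ratio below are invented for illustration.

```python
# Toy illustration: two researchers update different priors on the
# same evidence and reach different conclusions.

def update(prior_a: float, likelihood_ratio: float) -> float:
    """Posterior P(Theory A) given prior P(Theory A) and the evidence's
    likelihood ratio P(data | A) / P(data | B)."""
    odds = prior_a / (1 - prior_a) * likelihood_ratio
    return odds / (1 + odds)

# The same result, favouring Theory A by a (hypothetical) factor of 3:
alex = update(0.9, 3.0)  # Alex already leans toward Theory A
bea = update(0.1, 3.0)   # Bea leans toward Theory B

print(round(alex, 3))  # -> 0.964: Alex is now nearly certain of A
print(round(bea, 3))   # -> 0.25: Bea still considers B more likely
```

The same data thus move Alex to near-certainty while leaving Bea unconvinced, which is exactly the rational-but-uncomfortable situation the scenario describes.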
What role does the context of the publication play?
“Assuming Alex and Bea both publish the new results: how will the scientific community evaluate the studies? Will Bea’s work carry more weight because she changed beliefs she had held for years?” Eidels asks. “If we assume that the studies in Alex’s and Bea’s laboratories were conducted with the same care, it seems unfair to give the same results more weight when they come from one laboratory rather than the other.” Yet such context likely does play a role in how a publication is evaluated.
As an example, he cites a 2011 study in which 100 test subjects were asked to guess which of two curtains concealed an erotic picture. While the chance hit rate should have been 50 percent, the subjects actually guessed correctly 53 percent of the time. The study’s authors interpreted this to mean that people can, to some extent, see into the future — a very bold interpretation. “Since then, there have been numerous attempts to replicate these results,” says Eidels. “Here, too, the question arises to what extent the results are influenced by whether the researchers believe in supernatural phenomena.”
Recording and factoring in prior beliefs?
The results have not held up in larger trials. Replication studies across various laboratories, involving more than 2,000 people tested nearly 38,000 times in total, found a success rate of 49.89 percent — a result consistent with pure chance. Notably, the researchers asked all members of the participating teams whether they believed in supernatural phenomena. “Directly asking about prior beliefs is an important step,” says Eidels. “However, since such self-reports can be biased, complementary approaches may be useful.”
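The gap between the original 53 percent and the replicated 49.89 percent can be put in perspective with a quick back-of-the-envelope check (my own sketch, using a normal approximation to the binomial and treating the article’s simplified figures — 53 of 100 guesses, roughly 38,000 replication trials — as the raw counts):

```python
# Rough check: how far does an observed hit count deviate from the
# 50% chance rate, in standard deviations?
import math

def z_score(successes: int, trials: int, p: float = 0.5) -> float:
    """Standard score of the observed hit count under pure chance."""
    mean = trials * p
    sd = math.sqrt(trials * p * (1 - p))
    return (successes - mean) / sd

print(z_score(53, 100))                     # original study: z = 0.6
print(z_score(int(0.4989 * 38000), 38000))  # replication: z close to 0
```

Both values are well below the conventional significance threshold of about 2, so neither figure on its own is strong evidence against chance — which is consistent with the replications’ conclusion.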
For example, a researcher’s previous publications could be used to compute a kind of score reflecting how strongly committed they are to a particular theory. Artificial intelligence, which could automatically evaluate hundreds of studies, might also help here in the future, according to Eidels. “If prior beliefs are recorded via self-reports or, in the future, via AI, new questions will emerge,” Eidels says. “Should readers take information about researchers’ prior beliefs into account when interpreting scientific results? And if so, how? I leave this question for future discussions.”
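A minimal sketch of what such a publication-based score might look like — entirely hypothetical, with invented counts; the article does not specify any formula — could simply map the balance of a researcher’s past papers onto a scale from −1 (always argued for Theory B) to +1 (always argued for Theory A):

```python
# Hypothetical "conviction score" from publication counts.
# All numbers and the formula itself are illustrative assumptions.

def conviction_score(supports_a: int, supports_b: int) -> float:
    """+1.0 = every past paper backed Theory A; -1.0 = every paper
    backed Theory B; 0.0 = balanced or no record."""
    total = supports_a + supports_b
    if total == 0:
        return 0.0
    return (supports_a - supports_b) / total

print(conviction_score(18, 2))  # an Alex-like record, strongly pro-A
print(conviction_score(1, 14))  # a Bea-like record, strongly pro-B
```

A real implementation would of course face much harder problems — classifying what a paper actually argues, weighting recency, handling co-authorship — which is presumably where the AI-based evaluation mentioned above would come in.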
Source: Ami Eidels (University of Newcastle, Callaghan, NSW, Australia), Royal Society Open Science, doi: 10.1098/rsos.231613