Peer-reviewed "research"

In all sorts of different fields, research productivity has been flat or declining for decades, and peer review doesn’t seem to have changed that trend. New ideas are failing to displace older ones. Many peer-reviewed findings don’t replicate, and most of them may be straight-up false. When you ask scientists to rate 20th century discoveries in physics, medicine, and chemistry that won Nobel Prizes, they say the ones that came out before peer review are just as good or even better than the ones that came out afterward. In fact, you can’t even ask them to rate the Nobel Prize-winning discoveries from the 1990s and 2000s because there aren’t enough of them.

Of course, a lot of other stuff has changed since World War II. We did a terrible job running this experiment, so it’s all confounded. All we can say from these big trends is that we have no idea whether peer review helped, it might have hurt, it cost a ton, and the current state of the scientific literature is pretty abysmal. In this biz, we call this a total flop.

...

Here’s a simple question: does peer review actually do the thing it’s supposed to do? Does it catch bad research and prevent it from being published?

It doesn’t. Scientists have run studies where they deliberately add errors to papers, send them out to reviewers, and simply count how many errors the reviewers catch. Reviewers are pretty awful at this. In one study reviewers caught 30% of the major flaws, in another they caught 25%, and in a third they caught 29%. These were critical issues, like “the paper claims to be a randomized controlled trial but it isn’t” and “when you look at the graphs, it’s pretty clear there’s no effect” and “the authors draw conclusions that are totally unsupported by the data.” Reviewers mostly didn’t notice.

In fact, we’ve got knock-down, real-world data that peer review doesn’t work: fraudulent papers get published all the time. If reviewers were doing their job, we’d hear lots of stories like “Professor Cornelius von Fraud was fired today after trying to submit a fake paper to a scientific journal.” But we never hear stories like that. Instead, pretty much every story about fraud begins with the paper passing review and being published. Only later does some good Samaritan—often someone in the author’s own lab!—notice something weird and decide to investigate.

Why don’t reviewers catch basic errors and blatant fraud? One reason is that they almost never look at the data behind the papers they review, which is exactly where the errors and fraud are most likely to be. In fact, most journals don’t require you to make your data public at all. You’re supposed to provide them “on request,” but most people don’t. That’s how we’ve ended up in sitcom-esque situations like ~20% of genetics papers having totally useless data because Excel autocorrected the names of genes into months and years.

(When one editor started asking authors to add their raw data after they submitted a paper to his journal, half of them declined and retracted their submissions. This suggests, in the editor’s words, “a possibility that the raw data did not exist from the beginning.”)
 
Peer review of articles is one thing (and it comes too late). The root problem is the article itself and the rationale behind it.

Many articles in computational finance cannot be "audited": the context, the algorithms, and the data are incomplete. Reader requests for feedback on ML articles that claim a 10,000x speedup are met with a wall of silence.
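
As an aside that is not from the thread: below is a minimal, self-contained sketch of what an "auditable" speedup claim could look like. The baseline and the "fast" method are hypothetical stand-ins; both are timed on the same input, and both the measured ratio and the difference in answers are reported, since accuracy is part of the claim. A real audit would also need the authors' actual code, data and hardware details.

C++:
// Sketch of an auditable benchmark: time a baseline and a "fast" implementation
// on the same input and report the measured ratio plus the difference in results.
// baseline() and fast_version() are hypothetical stand-ins, not anyone's real code.
#include <chrono>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

double baseline(const std::vector<double>& x)      // stand-in for the reference method
{
    double s = 0.0;
    for (double v : x) s += std::exp(std::sin(v));
    return s;
}

double fast_version(const std::vector<double>& x)  // stand-in for the claimed fast method
{
    double s = 0.0;
    for (double v : x) s += 1.0 + std::sin(v);     // cheaper approximation: different answer!
    return s;
}

template <typename F>
double time_ms(F f, const std::vector<double>& x, double& result)
{
    const auto t0 = std::chrono::steady_clock::now();
    result = f(x);
    const auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main()
{
    std::vector<double> x(10000000);
    for (std::size_t i = 0; i < x.size(); ++i) x[i] = 0.001 * static_cast<double>(i);

    double r1 = 0.0, r2 = 0.0;
    const double t_base = time_ms(baseline, x, r1);
    const double t_fast = time_ms(fast_version, x, r2);

    std::cout << "baseline: " << t_base << " ms, fast: " << t_fast << " ms\n";
    std::cout << "speedup : " << t_base / t_fast << "x\n";
    std::cout << "answers : " << r1 << " vs " << r2 << " (accuracy is part of the claim)\n";
}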

“It is better to solve one problem five different ways, than to solve five problems one way.”
― George Pólya

In many PDE/FDM articles it is the other way around: one solution method for all problems.
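
As a small illustration of Pólya's point (not taken from any of the articles under discussion), here is a sketch that prices the same European call three different ways, with closed-form Black-Scholes, a CRR binomial tree, and plain Monte Carlo, so the reader can check that the answers agree. All parameter values in main() are assumptions made for the example.

C++:
// One problem, three methods: price a European call with a closed-form formula,
// a Cox-Ross-Rubinstein binomial tree, and plain Monte Carlo, then compare.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

// Closed-form Black-Scholes call price.
double bs_call(double S, double K, double r, double sigma, double T)
{
    const double d1 = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * std::sqrt(T));
    const double d2 = d1 - sigma * std::sqrt(T);
    auto N = [](double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }; // standard normal CDF
    return S * N(d1) - K * std::exp(-r * T) * N(d2);
}

// Cox-Ross-Rubinstein binomial tree with backward induction.
double crr_call(double S, double K, double r, double sigma, double T, int steps)
{
    const double dt = T / steps;
    const double u = std::exp(sigma * std::sqrt(dt));
    const double d = 1.0 / u;
    const double p = (std::exp(r * dt) - d) / (u - d);
    const double disc = std::exp(-r * dt);

    std::vector<double> v(steps + 1);
    for (int i = 0; i <= steps; ++i)                 // terminal payoffs
        v[i] = std::max(S * std::pow(u, steps - i) * std::pow(d, i) - K, 0.0);
    for (int n = steps - 1; n >= 0; --n)             // roll back through the tree
        for (int i = 0; i <= n; ++i)
            v[i] = disc * (p * v[i] + (1.0 - p) * v[i + 1]);
    return v[0];
}

// Plain Monte Carlo under risk-neutral geometric Brownian motion.
double mc_call(double S, double K, double r, double sigma, double T, std::int64_t paths)
{
    std::mt19937_64 rng(42);
    std::normal_distribution<double> z(0.0, 1.0);
    double sum = 0.0;
    for (std::int64_t i = 0; i < paths; ++i)
    {
        const double ST = S * std::exp((r - 0.5 * sigma * sigma) * T + sigma * std::sqrt(T) * z(rng));
        sum += std::max(ST - K, 0.0);
    }
    return std::exp(-r * T) * sum / static_cast<double>(paths);
}

int main()
{
    // Illustrative parameters only.
    const double S = 100.0, K = 100.0, r = 0.05, sigma = 0.2, T = 1.0;
    std::cout << "Closed form : " << bs_call(S, K, r, sigma, T) << '\n';
    std::cout << "CRR tree    : " << crr_call(S, K, r, sigma, T, 1000) << '\n';
    std::cout << "Monte Carlo : " << mc_call(S, K, r, sigma, T, 2000000) << '\n';
}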
 
I did my Ph.D. in a traditional engineering discipline, published 40 SCI papers (20+ as first author), reviewed more than 200 manuscripts (including a few for high-IF journals, up to IF 20), and have served on the editorial boards of 2~3 good journals (impact factor 3~6).
Peer review does have the issues you mention, but it is still how top research gets published. I guess nowadays it just takes extra effort to find the gems among an ever-increasing amount of trash, which is probably why many researchers only read the top journals.
For quant, I speculate that the IP issues and the nature of the business itself make it even harder for the public to access top knowledge. Who would want to share their secret formula?
Curious to hear others' thoughts.
 
What about research/IP funded with taxpayers' money?

40 papers is quite a hefty number. How many gems in there?
It's about 10 papers/year.
 
Sorry I wasn't clear - these 40 papers date back to when I started my PhD 10 years ago. They are the accumulation of my entire research career, from PhD to postdoc to a research role in a company...

As for how many of them are gems - perhaps 3-5 (on my own, biased scale), mainly because they delivered some practical value to industry.

I still peer-review manuscripts nowadays, but only in areas very close to my own. Over the last few years I think impact factors have become inflated (more papers are being published, average reviewer quality has been dragged down, etc.).


 