Quant Rankings (2021) Risk.net + QuantNet, and More

Hi, I just posted this on LinkedIn, describing how multiple sources of quant rankings can be combined.

There are quite a few ranking methodologies (Risk.net, Quantnet.com, and Topuniversities.com) available to assess quant finance master's programs. Here I tried to combine them all: from roughly 40 columns I construct three equally weighted values, (1) a reputation value, (2) a remuneration value, and (3) a quality value.
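The combination step can be sketched roughly as follows. Everything here is illustrative: the column names, groupings, and numbers are hypothetical stand-ins, not the actual scraped dataset.

```python
# Sketch: min-max-normalize raw ranking columns, average them within
# three groups, then combine the groups with equal (1/3) weights.
# All column names and values below are hypothetical placeholders.

def min_max(values):
    """Scale a list of numbers to [0, 1]; constant columns map to 0."""
    lo, hi = min(values), max(values)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in values]

programs = ["Baruch", "Princeton", "ETH Zurich"]

# Each group holds raw metric columns, one value per program,
# oriented so that higher is better.
groups = {
    "reputation":   {"peer_score": [90, 98, 92], "gre_quant": [169, 170, 168]},
    "remuneration": {"base_salary": [175, 160, 120], "employment_rate": [100, 95, 90]},
    "quality":      {"staff_ratio": [0.5, 0.4, 0.6], "citations": [60, 80, 95]},
}

scores = {}
for p_idx, p in enumerate(programs):
    group_means = []
    for cols in groups.values():
        # Normalize each column across programs, take this program's value.
        normed = [min_max(col)[p_idx] for col in cols.values()]
        group_means.append(sum(normed) / len(normed))
    # Equal weight across the three groups.
    scores[p] = sum(group_means) / len(group_means)

ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```

With real data the normalization and column-to-group assignment would of course matter a lot more than this toy shows.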

Top three are Baruch College, Princeton University, and ETH Zürich. The results should be reproducible (scrapers, code, and data included for rankings up to September 2021).

Baruch is a small program that performs really well on placement; it is also affordable, and its graduates go on to earn a lot of money, hence it ranks high on remuneration. Princeton has a high reputation: it engages in less gerrymandering (i.e., its data reporting is consistent across ranking outlets), its students have high GRE scores and are more likely to accept an offer, and its acceptance rate is low. ETH Zürich does extremely well on quality thanks to small classes, a high teacher-to-student ratio, highly cited faculty, and many lecturers from industry.

1. Baruch College, City University of New York
2. Princeton University (Bendheim Center for Finance)
3. ETH Zurich/University of Zurich
4. New York University (Courant Institute of Mathematical Sciences)
5. Columbia University (Columbia Engineering)
6. New York University (Tandon School of Engineering)
7. Cornell University
8. University of Toronto
9. Carnegie Mellon University
10. University of California, Berkeley (Haas School of Business)


 
nice attempt but a couple of points based on a cursory look:

1. something looks off about the data and how you've handled it. for example, see class sizes you have for these two programs:

Carnegie Mellon University 44
New York University (Tandon School of Engineering) 23
those are way off the mark. i know you use "coherency" to account for discrepancy in data but then you end up using, and giving significant weight to, bad data anyway (why use the two class sizes above from risk.net and not from quantnet?) it makes the whole exercise unreliable
2. some of the feature weights seem really odd. for example, number of industry affiliated lecturers getting a significantly higher weight (13.33%) than things like employment rate (8.33%), %age of offer holders who enroll (5%), gre quant scores (6.66%) seems not quite right. sure, these are subjective and ymmv but i doubt if many people would agree with your model
 

Yeah, I am going to agree with most everything you said. From memory, I do think both class sizes are included? And I enjoy these discussions a lot -- what characteristics would be most essential to students in the long run? E.g., should the employment rate (remuneration) be more important than prestige (GRE quant scores)? Should receiving practical industry knowledge be more important than the percentage who enroll?

If I had to state my personal opinion, I think the market should reflect the score, and I would just go with salary and employment rate, but this has its own problems.

Perhaps the program mostly consists of foreign students who first have to cross the visa divide. Perhaps there is a 'conspiracy' to place you in a high-paying position after graduation, but with lower upward mobility, or an agreement to forgo a higher bonus in exchange for a high base salary. What if the faculty is simply lying about or massaging the numbers? What if they employ the unemployed students as TAs to indirectly improve the employment rate? I don't think any of these ratings should be taken too seriously (yet).

Edit: just a quick note, it would be cool if someone could come up with a different weighting and republish the edited Colab here.
 
what characteristics would be most essential to students in the long run? E.g., should the employment rate (remuneration) be more important than prestige (GRE quant scores)? Should receiving practical industry knowledge be more important than the percentage who enroll?
I think (quant/data science-related) employment rate >>>>> GRE quant scores. Generally, you would expect more prestigious programs (note: the program vs. the overall university itself, which is what Topuniversities.com assesses, and why CMU MSCF and UCB MFE might be ranked significantly lower than deserved -- I'm from neither program, btw) to have a better employment rate, since they've had time to build up their reputation, have lots of resources available, and have a strong alumni base (who come back to hire from their program cuz they loved it there :cool:).
Perhaps the program mostly consists of foreigners who first have to cross the visa divide
For USA programs, you get up to 3 years to work via STEM OPT. Also, I think every program has mostly foreign students, so that sort of eliminates the visa divide as a barrier initially.
 
Nice! I am going to try and edit the weights a bit (or code it such that a user can enter weights as they desire) and see if the rankings change significantly. Thanks for the code, Derek!
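One minimal way to let a user enter their own weights and re-rank could look like the sketch below. The feature names, scores, and weight values are hypothetical; the thread's actual features (industry lecturers at 13.33%, employment rate at 8.33%, etc.) would slot in the same way.

```python
# Sketch: re-rank programs under user-supplied feature weights.
# Feature names and normalized scores below are hypothetical.

def rerank(scores_by_program, weights):
    """Rank programs by the weighted mean of normalized feature scores."""
    total = sum(weights.values())
    ranked = sorted(
        scores_by_program.items(),
        key=lambda kv: sum(weights[f] * kv[1][f] for f in weights) / total,
        reverse=True,
    )
    return [name for name, _ in ranked]

scores = {
    "Program A": {"employment_rate": 0.9, "industry_lecturers": 0.2, "gre_quant": 0.8},
    "Program B": {"employment_rate": 0.5, "industry_lecturers": 0.9, "gre_quant": 0.6},
}

# A lecturer-heavy weighting vs. an employment-heavy alternative:
print(rerank(scores, {"employment_rate": 1, "industry_lecturers": 2, "gre_quant": 1}))
# → ['Program B', 'Program A']
print(rerank(scores, {"employment_rate": 5, "industry_lecturers": 1, "gre_quant": 1}))
# → ['Program A', 'Program B']
```

The two calls show the critique's point directly: the same data produces a different top program once the weight on industry lecturers vs. employment rate is flipped.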
 