Why Rankings Disagree

If MBA rankings all agreed, you'd only need one. They don't agree because they measure different things. US News weights peer assessment and recruiter surveys. Bloomberg Businessweek emphasises student satisfaction and compensation. The Financial Times factors in research output and international diversity. Each methodology produces a different ordering, and applicants who don't understand this chase rankings rather than fit.

US News & World Report

US News publishes the most widely cited MBA ranking in the US. Its methodology weights:

  • Quality assessment (40%) — Peer assessment by business school deans and MBA directors (25%) plus recruiter assessment (15%).
  • Placement success (35%) — Employment rates at graduation and three months post-graduation, plus starting salary and bonus.
  • Student selectivity (25%) — Mean GMAT/GRE scores, mean undergraduate GPA, and acceptance rate.
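
The three pillars above combine into a single score by weighted sum. A minimal sketch of that arithmetic, using the published top-level weights — the 0–100 normalisation of each pillar and both schools' sub-scores are hypothetical, since US News does not publish its raw scaling:

```python
# Illustrative composite score using US News's published top-level
# weights. Treating each pillar as a 0-100 sub-score is an assumption.

USN_WEIGHTS = {
    "quality_assessment": 0.40,   # peer (25%) + recruiter (15%)
    "placement_success": 0.35,    # employment rates, salary and bonus
    "student_selectivity": 0.25,  # GMAT/GRE, GPA, acceptance rate
}

def composite_score(subscores: dict) -> float:
    """Weighted sum of sub-scores, each assumed normalised to 0-100."""
    return sum(USN_WEIGHTS[k] * subscores[k] for k in USN_WEIGHTS)

# Two hypothetical schools: one stronger on reputation, one on placement.
school_a = {"quality_assessment": 92, "placement_success": 80, "student_selectivity": 85}
school_b = {"quality_assessment": 78, "placement_success": 95, "student_selectivity": 88}

print(round(composite_score(school_a), 2))  # 86.05
print(round(composite_score(school_b), 2))  # 86.45
```

Note how close the two totals are: a modest shift in weights would reverse the ordering, which is exactly why different methodologies produce different tables.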

What it rewards: Established reputation, high incoming scores, and strong immediate employment outcomes. Schools that have been ranked highly tend to stay ranked highly because peer assessment is self-reinforcing.

What it misses: Long-term career outcomes, student experience, programme innovation, and value for money.

Bloomberg Businessweek

Bloomberg's ranking emphasises the student and alumni experience:

  • Compensation (35.7%) — Starting salary adjusted for industry and pre-MBA pay.
  • Networking (26.2%) — Alumni survey on network quality and engagement.
  • Learning (25.8%) — Student survey on teaching quality, curriculum, and programme delivery.
  • Entrepreneurship (12.3%) — Alumni-founded companies, funding raised.
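
The distinctive piece here is the compensation pillar, which rewards salary uplift relative to pre-MBA pay rather than absolute salary. A sketch under hypothetical assumptions — the `uplift_index` scaling and all sub-scores below are invented for illustration, as Bloomberg's actual index construction is not public:

```python
# Bloomberg-style composite using the published pillar weights.
# The uplift_index normalisation is a hypothetical stand-in for
# Bloomberg's unpublished compensation adjustment.

BBW_WEIGHTS = {
    "compensation": 0.357,
    "networking": 0.262,
    "learning": 0.258,
    "entrepreneurship": 0.123,
}

def uplift_index(pre_mba_salary: float, post_mba_salary: float) -> float:
    """Hypothetical 0-100 index: percentage salary uplift, capped at 100."""
    uplift = post_mba_salary / pre_mba_salary - 1.0
    return min(100.0, uplift * 100.0)

def bloomberg_style_score(subscores: dict) -> float:
    return sum(BBW_WEIGHTS[k] * subscores[k] for k in BBW_WEIGHTS)

scores = {
    "compensation": uplift_index(80_000, 150_000),  # 87.5: an 87.5% raise
    "networking": 90,
    "learning": 85,
    "entrepreneurship": 70,
}
print(round(bloomberg_style_score(scores), 2))
```

Because compensation is measured as uplift, a school whose graduates start from modest pre-MBA salaries can outscore one with higher absolute salaries — a design choice US News's methodology does not share.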

What it rewards: Schools where students and alumni report high satisfaction and strong post-MBA compensation relative to pre-MBA compensation.

What it misses: Input quality (GMAT, selectivity) and academic research.

Financial Times Global MBA Ranking

The FT ranking has the broadest global scope:

  • Career progress (~40%) — Weighted alumni salary and salary increase since graduation.
  • Diversity (gender, nationality) — Programmes with more international students and women rank higher.
  • Research — Faculty publication output in top journals.
  • International experience — Study abroad, international faculty, international board members.

What it rewards: Schools with strong international orientation, research output, and alumni salary growth. European schools tend to rank higher here than in US-centric rankings.

What it misses: US-specific outcomes, recruiter perception, and programme size effects.

What No Ranking Captures

  • Fit — The most important factor in your MBA experience. A collaborative person at a hyper-competitive school (or vice versa) will be miserable regardless of ranking.
  • ROI by individual — A full scholarship at a T-25 school may have better personal ROI than an M7 programme at full price. Rankings don't model this.
  • Industry-specific strength — Rankings average across all career outcomes. If you want healthcare management, a school ranked 30th overall but with the best healthcare programme is a better choice than the #5 school.
  • Geography — Regional network strength, proximity to target employers, and local alumni density aren't captured in any ranking.
  • Part-time and online quality — Most rankings focus exclusively on full-time programmes. Part-time and online rankings exist but are less developed.

How to Use Rankings

Rankings are a starting point for research, not a decision framework. Use them to identify a universe of schools worth investigating. Then go deeper: look at employment reports, talk to alumni, visit campuses, and model your own ROI.
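
Modelling your own ROI can be as simple as comparing total cost (net tuition plus forgone salary) against the salary uplift over some horizon. A deliberately simplified sketch — every figure below is a hypothetical placeholder, and it ignores discounting, taxes, raises, and signing bonuses:

```python
# A minimal personal-ROI sketch with made-up numbers. Replace the
# inputs with your own; refine the model before trusting it.

def mba_roi(tuition, scholarship, pre_mba_salary, post_mba_salary,
            years_out_of_work=2, horizon_years=10):
    """Net gain over horizon_years versus staying in the current job."""
    cost = (tuition - scholarship) + pre_mba_salary * years_out_of_work
    gain = (post_mba_salary - pre_mba_salary) * horizon_years
    return gain - cost

# Full scholarship at a lower-ranked school vs. full price at a higher-ranked one.
t25 = mba_roi(tuition=160_000, scholarship=160_000,
              pre_mba_salary=85_000, post_mba_salary=150_000)
m7 = mba_roi(tuition=240_000, scholarship=0,
             pre_mba_salary=85_000, post_mba_salary=175_000)

print(t25)  # 480000
print(m7)   # 490000
```

Under these made-up numbers the two options come out nearly identical — exactly the kind of individual comparison an aggregated ranking cannot make for you.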

AdmitBase complements rankings with personalised match scores — showing you where your GMAT, GPA, and experience actually make you competitive, rather than where a school sits on an aggregated list.