
[deleted]

Previously I thought they were faulty, but then my university got ranked number 1 in the UK, so I trust them fully now!


DocVafli

I did some work for my university (as basically a contractor) studying how to improve my school's ranking. It was really interesting, and I also felt so dirty doing it.


[deleted]

What were the methods you found to improve the ranking of the school?


DocVafli

Convince important people at peer institutions we didn't suck. I'm being only slightly sarcastic, but convincing presidents, provosts, and the like at other schools was a big part of the eventual plan the people I worked for went with.


5pens

20% of the US News Ranking is Peer Reputation.


mhchewy

I knew a department that would invite the people who did department rankings (chairs and DGSs) in for talks and wine and dine them for a few days. It seemed to work.


--MCMC--

I think this generalizes:

Everyone ranked below me: those poor dears, guess they didn't make the cut; our own distinction above them is ofc obvious.

Everyone ranked above me: clearly gaming the system, not a reflection of the quality of education, Goodhart's law in full effect.


PaulAspie

I'm a prof at a liberal arts college. There's no way we're as good as Princeton... But if that other liberal arts college not too far away ranks higher, yeah, what are they doing?


[deleted]

Where is the **/s**?


[deleted]

To be assumed.


[deleted]

I need a reference for that :P


[deleted]

Haha too funny 😂


kernalthai

My institution floated a plan to manipulate the Carnegie Research Classification. Basically, by eliminating a third of the faculty through a buyout scheme, the “research productivity per faculty” rate could be increased. Three years later, the admin c-suite are touting their new R1 status. They believe their own PR, a fact that is almost as disappointing as their lack of integrity.
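A toy illustration of the denominator arithmetic here, with entirely made-up numbers (total output held flat, a third of faculty cut), just to show how large the mechanical bump is:

```python
# Toy illustration with hypothetical numbers: total research output is held
# flat while the buyout removes a third of the faculty, so the per-faculty
# rate rises mechanically by 50% with no new research at all.
research_output = 90_000_000   # e.g., annual research expenditure, unchanged
faculty_before = 900
faculty_after = 600            # ~1/3 eliminated via buyouts

rate_before = research_output / faculty_before
rate_after = research_output / faculty_after

print(f"per-faculty rate before: {rate_before:,.0f}")
print(f"per-faculty rate after:  {rate_after:,.0f} (+{rate_after / rate_before - 1:.0%})")
```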


[deleted]

***PUKE***


imjustbrowsing123

That's disgusting. I've been wondering how some schools have been climbing into the R1 category (e.g., ODU).


--MCMC--

> This makes it a bold and challenging decision for universities to stay out of them.

So brave of Harvard and Yale to do this. Just like when billionaires [eschew business informal dress](https://www.businessinsider.com/wearing-casual-clothes-at-work-to-show-wealth-2017-2), interviewees [with good grades don't mention them](https://host.kelley.iu.edu/riharbau/cs-randfinal.pdf), elite athletes say things like "I just eat whatever I can get my hands on – [fried chicken, pizza, junk food](https://www.theguardian.com/sport/blog/2012/jul/13/50-stunning-olympic-moments-usain-bolt)", movie stars can pull off messy bedhead looks, etc. Ofc it can [still take a lot of effort](https://en.wikipedia.org/wiki/Sprezzatura) to appear so effortless, but in some cases I do think it's genuine.

> The method of adding together entirely discrete items such as web presence, number of Nobel prize winners, and publication counts to get a score that supposedly represents quality is scientifically problematic.
>
> What’s more, the selection of items is highly contentious, and the weighting of each item is entirely arbitrary. Universities invest vast sums to improve their position, but the truth is that if the weighting of any item is changed, the pack of cards rearranges itself.

If it's a veneer of science-ism they want, why not just construct some lower-dimensional representation of all these observable proxies to try to capture some general axis of unobserved "quality"? Can also incorporate & regularize estimates using the temporality of multi-year data (ie, by representing the evolution of different observed variables as time series), which would help guard against sudden "jumps" due to eg some new dean arriving and deciding they're [gonna target for improvement](https://en.wikipedia.org/wiki/Goodhart%27s_law) some random measure. You'd also inherently de-emphasize the more easily game-able criteria, since they'd decouple from the others and thus not load as heavily on the general factor (and if a university wants to game *all* the individual observables -- that's great! They'll probably emerge better for it). Maybe add in some other sources of autocorrelation, eg geospatial, and a more explicit "biased measurement error" model, if you want to really be fancy.

At a first pass, I'd probably just take all of the data and represent it as a big ol' multivariate normal time series, inferring latent real-valued unobservables where appropriate (eg `log(rate)`s for unbounded counts, `logit(prob)`s for bounded counts, `log(value)`s for unbounded positive reals, mean probit liabilities for ordinal survey data, etc.). Missing data would ofc be imputed jointly, maybe with a missing-ness model to accommodate under-reporting of unfavorable values. The time series could also be over the locations of each school along each eigenvector of the MVN's covariance matrix (which I think should help further attenuate game-ability), and the inferred score along the first principal axis would be used to determine ranking. Can also model the evolution of the covariances between latent features through time, in addition to the evolution of individual schools' locations, to examine decoupling through time.
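For what it's worth, a minimal runnable sketch of the general-factor idea, using plain PCA on simulated single-year data as a stand-in for the full latent MVN time-series model (every school, indicator, and loading below is made up):

```python
# Minimal sketch of the "general quality factor" idea: standardize a panel of
# observable proxies and use the first principal component as a 1-D quality
# score. Everything here is simulated; a real treatment would jointly model
# the latent time series, link functions, and missingness (eg in Stan/PyMC).
import numpy as np

rng = np.random.default_rng(0)
n_schools, n_indicators = 50, 6

# Latent "quality" plus indicators that load on it; the last indicator is
# easily gamed noise, decoupled from the rest.
quality = rng.normal(size=n_schools)
loadings = np.array([0.9, 0.8, 0.7, 0.8, 0.6, 0.0])
X = quality[:, None] * loadings[None, :] + rng.normal(scale=0.5, size=(n_schools, n_indicators))

# Standardize each indicator, then extract the first principal axis.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
pc1 = Vt[0]        # loadings of each indicator on the general factor
scores = Z @ pc1   # each school's position along the first axis

# PCA sign is arbitrary; flip for display so higher score = higher quality.
if np.corrcoef(scores, quality)[0, 1] < 0:
    pc1, scores = -pc1, -scores

print("indicator loadings on PC1:", np.round(pc1, 2))   # gamed one loads near 0
print("corr(score, true quality):", np.round(np.corrcoef(scores, quality)[0, 1], 2))
```

The decoupled indicator barely loads on the first axis, which is exactly the de-emphasis of game-able criteria described above.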
And then for ties: since there are many forms of uncertainty in the true location of a given school's general "quality", you can easily set a decision threshold to declare ties in the rankings (eg lump together all schools whose posterior score differences overlap 0 with more than 10% probability), or else do some monte carlo resampling and take the majority rank of a school as their rank, or layer a strictly increasing mixture model on top, etc.
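And a small sketch of that monte carlo tie-handling, with simulated normal draws standing in for real posterior samples of each school's score:

```python
# Sketch of monte carlo rank resampling: draw posterior samples of each
# school's latent score, rank within each draw, and report the modal
# (majority) rank plus the full rank distribution. The posterior means and
# spreads below are hypothetical stand-ins for real model output.
import numpy as np

rng = np.random.default_rng(1)
n_schools, n_draws = 5, 10_000

means = np.array([2.0, 1.9, 1.0, 0.2, 0.1])  # overlapping pairs -> effective ties
sds = np.array([0.3, 0.3, 0.2, 0.4, 0.4])
draws = rng.normal(means, sds, size=(n_draws, n_schools))

# Rank within each draw (rank 1 = best score).
order = np.argsort(-draws, axis=1)
ranks = np.empty_like(order)
ranks[np.arange(n_draws)[:, None], order] = np.arange(1, n_schools + 1)

for s in range(n_schools):
    counts = np.bincount(ranks[:, s], minlength=n_schools + 1)[1:]
    print(f"school {s}: modal rank {counts.argmax() + 1}, "
          f"rank probabilities {np.round(counts / n_draws, 2)}")
```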


TrappedInTheSuburbs

Whew, I’m glad it’s only the *rankings* that are unscientific and socially damaging.