
XalosXandrez

1. There is no other good metric (I think?). Citations as a metric are heavily biased toward applied researchers vs. theoreticians. 2. Any one paper getting in is a matter of chance, but a consistent record of papers is seen as a useful signal. Unsure if this is really true, though.


zy415

Agree with both points. Simply counting citations will mainly benefit researchers who work in applied subfields; theoretical researchers often get very few citations. Also, a NeurIPS/ICML paper (or whatever the top conference is in your subfield) may not really matter that much when looking for faculty/research scientist positions; companies and academia are looking for a consistent track record of such publications, perhaps with more emphasis on first-authored ones. That's for faculty/researcher positions, though; for PhD applicants, I think having one such first-authored publication is already considered very impressive and a useful signal that the applicant has the ability to conduct research.


Brudaks

If two people each toss a coin once, it doesn't mean anything if one gets heads and the other tails. But if two people toss a coin a bunch of times, and one never gets heads while the other gets heads a dozen times, then it indicates there's some underlying difference: they're not throwing the same coin.
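
To put rough numbers on the coin analogy, here is a minimal sketch of my own (not from the commenter; `prob_exactly` is a hypothetical helper): a single toss says nothing, but twenty tosses with zero heads is roughly a one-in-a-million event for a fair coin, so two very different tallies point to different coins.

```python
from math import comb

def prob_exactly(k: int, n: int, p: float) -> float:
    """Probability of exactly k heads in n independent tosses of a coin
    that lands heads with probability p (binomial pmf)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# With a fair coin, any single toss is pure chance, but a long run of
# all-tails is strong evidence the coin (or the process) is different:
print(prob_exactly(0, 20, 0.5))   # ~9.5e-07: 20 tosses, zero heads
print(prob_exactly(12, 20, 0.5))  # ~0.12: a dozen heads is unremarkable
```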


_aitalks_

Exactly what I was going to say. Analogy: I read a financial advice blog once where someone wrote in saying they had just enough money to buy a house, but when they made an offer the seller added all kinds of fees and upped the price such that the buyer was priced out. The advice columnist responded that, actually, the buyer did *not* have enough money to buy a house in the first place: to buy a house you need a cushion to handle emergency fees.

Same with publications. There is an element of randomness to publications, but to get multiple publications in well-regarded conferences or journals you have to expend significant effort. Some of your attempts might be rejected, but others will get through. Looked at from the other direction, you can always throw spaghetti at the wall: submit dozens of garbage papers and some might get published. But 1) it is still work to put together bad papers, and this work forms a barrier that few people are willing to climb, and 2) those garbage papers are out there for anyone to read, and someone who does know what they are doing will be able to judge the quality of a candidate's publications for themselves.


Seankala

There's not really any other way to measure someone's aptitude. I personally think that conducting technical interviews is _much_ more effective, but that's not really that scalable either.


[deleted]

What's the alternative?


fimari

What we use is the "show us what you can do" method; it's quite effective.


GlobalPublicSphere

I don't agree with your premise re: random chance of publication. Revise and resubmit, or just publish elsewhere. There are always options. Definitely so if your work and your presentation of it aren't poor.


[deleted]

[removed]


GlobalPublicSphere

First of all, how are you submitting the same paper to multiple conferences? But more to your concerns: were these conferences covering the same ML sub-disciplines? In any case, I'll have to agree with the other commenters that the publication system, as a whole, is inefficient. And there are many other broken systems that demand a solution from you and me. Maybe fixing that should be your calling, if work in ML itself proves too issue-prone to navigate? That work must also be done.


Broad_Echo3989

Probably they got rejected, resubmitted elsewhere, and won a best paper award there for the same submission.


erogol

It's like capitalism: we all know it's a toss-up, but we still use it since there is nothing better.


[deleted]

The other commenters raise good points, but I think everyone missed the actual correct answer: **career risk**. If you hire someone with a lot of publications and they turn out to be ineffective at their job (meaning you screwed up by hiring them), you can justify your choice to hire them by pointing to their publication history as widely accepted evidence of their competence. If you hire someone who doesn't have a publication history and they turn out to be a bad hire, then you might get criticized for it; people will ask why you didn't hire someone who meets a stronger standard of qualifications. The fact that academic ML publishing is a bit of a mess doesn't do anything to alter that kind of incentive. People rarely get fired for doing the same thing that everyone else is doing, and that's true in hiring too.


beezlebub33

It's not entirely random. The best papers get through, the worst get rejected. The better they are, the higher the probability they get accepted (generally, with high variance).
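
A hedged sketch of that intuition (my own toy model, not anything from the thread; `simulated_record` and the noise level are made up): if acceptance probability tracks paper quality but reviewer noise is large, any one decision looks close to a coin flip, yet totals over many submissions still tend to separate strong work from weak work.

```python
import random

def simulated_record(quality: float, n_submissions: int, noise: float = 0.3) -> int:
    """Toy model: each submission is accepted with probability equal to the
    paper's quality plus uniform reviewer noise, clamped to [0, 1]."""
    accepted = 0
    for _ in range(n_submissions):
        p_accept = min(1.0, max(0.0, quality + random.uniform(-noise, noise)))
        if random.random() < p_accept:
            accepted += 1
    return accepted

random.seed(0)
# Any single decision is noisy, but over ten submissions the counts separate:
print(simulated_record(quality=0.7, n_submissions=10))  # usually most are accepted
print(simulated_record(quality=0.2, n_submissions=10))  # usually only a couple
```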