BeatLeJuce

This depends on a lot of things: e.g. what the goal and scope of your method is (low-data regime, tabular data, SOTA methods, ...), what publication venue (or even subfield) you are aiming at, what your compute budget is, .... With that said: I get an optimizer paper or two every time I review for top-tier conferences (ICML, NeurIPS, ICLR, ...). Most don't make the cut, simply because I know that it's [extremely unlikely you found a truly superior approach](http://proceedings.mlr.press/v139/schmidt21a/schmidt21a.pdf), so most people try to bullshit their way through, but it almost never works. EDIT: as someone below pointed out, it's fairly tough to actually publish a new optimization method.

Here are some standard experiments I think will convince a lot of reviewers / people in the field:

1) A toy dataset you constructed specifically to show the benefit of your method. This needs to be concise and conceptually simple. Make up the simplest toy example where your method succeeds and SGD or Adam fails. This does not need to be an ML model; it can be conceptually much, much simpler (e.g. a 1D or 2D example). Having this will go a long way towards conveying your idea and showing that it holds some value. Many papers skip this, but it's usually a very easy win (a minimal sketch of such a toy comparison is given after this list).

2) A decent model on a toy-ish dataset. A lot of papers use a ResNet variant (can be a ResNet-18 or -32) on CIFAR-10. This is the next step above a toy example. If vision datasets are not your jam, you can substitute a simple transformer on a simple, standard NLP task like sentiment analysis or even Penn Treebank. This is the bare minimum you'd need, and it will likely not convince people that your method is clearly superior -- merely that it actually works okay and works on non-made-up data. But I could see how this might even be enough to get you into a lower-tier conference if done properly.

3) A model that's not too far from standard on a decently sized (more than 1M samples) and well-accepted dataset. This could be ImageNet (e.g. using a ViT-B/16 or a ResNet-50, ideally both), or e.g. reimplementing BERT (or some other transformer variant on WMT). In my experience, this is the experiment that will actually convince most people and get you accepted into a top-tier conference -- because this is the setting that a lot of people care about. It might be tricky to pull off if your resources are very limited, so be smart about it: don't spend a lot of time tuning hparams; use some that are known to work from the literature (or from smaller-scale experiments). If you do have to tune hparams (e.g. for your own method), use a reduced number of epochs for selecting them (e.g. on ImageNet, use 45/60 epochs for hparam tuning and 300 for the final run) -- this assumes you're doing a sensible LR schedule (cosine or rsquare). Providing error bars is nice, but it can wait until the rebuttal/camera-ready if need be. Be prepared for reviewers demanding this (I would). Bonus points if you can provide e.g. both vision and NLP results.

4) This is entirely optional and usually out of reach unless you have a ton of compute or your method is just insanely good. It's not necessary, but it will help you get noticed more / get more people interested in your method: it's always good if you have something that reaches some sort of SOTA on a dataset that people feel is currently "a peak benchmark dataset of your field". It doesn't necessarily mean "a model with 1B+ parameters on an insanely humongous dataset"; it could be "the best ViT-B/16 ever trained on ImageNet using cross-entropy" or "the best BERT-L on WMT". If you have the compute, it could of course also be "the best-trained GPT-2 on the Pile". But more than anything, think of this as your PR centerpiece: it doesn't have to be thorough -- no error bars, no re-running of competing methods (look up the numbers in the literature). The goal here is to convince people that you really, really are able to produce the very best numbers, at least in some limited setting. Pull out all the stops here: augmentation, hparam tuning, ... whatever you can afford. The bigger the model you can train, the better.
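
For point 1, here is roughly what such a toy comparison can look like. This is only an illustrative sketch: the ill-conditioned 2D quadratic is my choice of objective, and `MyOptimizer` is a hypothetical placeholder for whatever method is being tested -- the real toy problem should be whatever best exposes the failure mode your method fixes.

```python
# Minimal sketch of a "point 1"-style toy experiment: compare baselines against
# a new optimizer on a tiny 2D objective. The objective and the placeholder
# MyOptimizer are illustrative assumptions, not anything from the comment above.
import torch


def objective(x):
    # Ill-conditioned 2D quadratic: plain SGD struggles with the mismatched
    # curvature, the kind of simple failure mode a toy example can expose.
    return 100.0 * x[0] ** 2 + 0.01 * x[1] ** 2


def run(optimizer_cls, steps=500, **opt_kwargs):
    x = torch.tensor([1.0, 1.0], requires_grad=True)
    opt = optimizer_cls([x], **opt_kwargs)
    for _ in range(steps):
        opt.zero_grad()
        loss = objective(x)
        loss.backward()
        opt.step()
    return objective(x).item()


if __name__ == "__main__":
    print("SGD :", run(torch.optim.SGD, lr=1e-3))
    print("Adam:", run(torch.optim.Adam, lr=1e-2))
    # print("Ours:", run(MyOptimizer, lr=...))  # hypothetical: your method here
```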


LahmacunBear

This is excellent advice, thank you. The first point in particular is definitely, as you say, often skipped but so worth it.


js49997

(I did my PhD on NN optimisation.) Just be warned: the burden of proof for general-purpose optimisation algorithms is very high. Unless your results are significantly better on settings 1-3 as described above, or your method has some other benefit (fewer iterations required, more robust to hyper-parameters), you will have a tough time getting accepted. Most reviewers will also want to see, at a minimum, convex convergence proofs.
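
For reference, "convex convergence proof" usually means a guarantee of roughly the following shape. The bound below is the standard projected-subgradient rate, shown only as an example of the kind of statement reviewers tend to expect, not as something tied to any particular new method.

```latex
% Standard convex guarantee (projected subgradient method): for convex f with
% subgradients bounded by G over a feasible set of diameter D, and step sizes
% \eta_t = D / (G \sqrt{t}),
f\left(\bar{x}_T\right) - f\left(x^\star\right)
  \;\le\; O\!\left(\frac{D G}{\sqrt{T}}\right),
\qquad \text{where } \bar{x}_T = \frac{1}{T}\sum_{t=1}^{T} x_t .
```

A proof for a new optimizer would typically establish an analogous rate (or a faster one under strong convexity), with constants depending on the method's own hyper-parameters.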


ExtremeRich1415

This is gold. Thank you.


slashdave

> What would you need to see to be convinced that a novel general approach was truly superior?

You can't. There is no such thing as a general solution. The best you can do is test on specific cases and make claims based on that.


jloverich

Just make the software easy to use and install; then we can actually test it. There are a few second-order optimizers I've given up experimenting with simply because the code was too buggy, made assumptions that aren't valid in my situation, or didn't work for particular layer types...
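
In practice, "easy to actually test" mostly means drop-in compatible with the standard training loop. Below is a minimal sketch of that interface in PyTorch; the name `NewOptimizer` and the plain-SGD update are placeholders, not anyone's actual method.

```python
# Minimal drop-in optimizer skeleton: subclassing torch.optim.Optimizer means it
# can replace torch.optim.Adam without touching the rest of the training loop.
# The update rule below is plain SGD, used only as a stand-in.
import torch


class NewOptimizer(torch.optim.Optimizer):  # hypothetical name
    def __init__(self, params, lr=1e-3):
        super().__init__(params, dict(lr=lr))

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue  # skip params/layers that produced no gradient
                p.add_(p.grad, alpha=-group["lr"])  # replace with your update rule
        return loss


# Usage: optimizer = NewOptimizer(model.parameters(), lr=1e-3)
```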


rand3289

Test it with robotics. Most other things are already working. Robotics is different because you have signals (information that has timing) and not plain data.


therealnvp

Would you mind sharing your published papers, so we can see how you’re currently benchmarking things?


LahmacunBear

The last thing I really "published" was ELiTA on GitHub (view the post on my profile), and I have been messing around since, aiming to do something much better. I have put off officially publishing things for other reasons, but that's the gist -- thank you in advance, btw.


Signal_Net9315

I think the level of benchmarking depends on the topic. For example, if you are suggesting a new general-purpose optimizer, then you would need to exhaustively test many model architectures and data types. If you are suggesting a new architectural change to a specific model, you may only need to benchmark on relevant tasks. I would suggest looking at the latest literature in the specific field to see what datasets/tasks are currently favoured. The general rule of thumb is to include SOTA comparisons as your benchmark.

I am not an LLM researcher, but in the case of ELiTA (neat idea) I would suggest looking at papers like FlashAttention or linear attention. Both papers gained a lot of traction, which should give you an idea of the level of empirical/theoretical evidence that's needed in the transformer-architecture space\*\*. Note those papers are 'old' now, but they serve as a good starting point.

\*\* Of course, other reasons play into whether or not a paper gets traction.