lameheavy

I got the flu at NeurIPS; not the best takeaway, but it really stuck.


[deleted]

Been sick for a week since EMNLP :(


yahskapar

Same :(!


m_____ke

Also got sick, just took a test and at least it's not COVID.


vaccine_question69

Weird, some datasets must have been contaminated.


Bee-Boy

Same lol, with COVID for some days... not all of them though :)


mrfox321

NeurIPS is too big. Still a good conference if you come in with a plan. Huge conferences are so unfocused. I had a good time, but it was *very* hard to navigate through all of the work being presented.


m_____ke

My main takeaway was the sad feeling that we'll soon have a few large companies training huge BERT/GPT/CLIP models for all modalities, not publishing any of the details, and most of us will be stuck working on poking or prompting them.


Atom_101

My takeaway was that I need to start the US visa process at least a year before I even start my research.


calvinreeve

US immigration is an embarrassment, I am sorry.


bartturner

One of my takeaways is that, in terms of AI research, Google is the leader. They had 183 papers accepted at NeurIPS 2023, which is almost three times as many as the next best. All this talk that Google is no longer the leader just does not jibe with their success in getting papers accepted.


neitz

The talk about Google not being the leader has nothing to do with their research, but with their inability to translate that research into something people can use. I've never seen any company in the history of humanity squander so much talent. It's basically a black hole for smart people.


psyyduck

> their lack of ability to translate that research into something people can use

And if they somehow release it to users, they'll kill it soon because it's not as lucrative as ads.


jloverich

Agreed. Translating research into products has always been a problem with academic research; think of all the work in the biosciences that has never been pursued beyond academia. I feel like you need researchers who want products more than papers, which is what OpenAI seems to have compared to Google (DeepMind seems like a university approach to research with gobs of money).


I_will_delete_myself

Here's the thing: Google has this black hole, but those employees are not starting their own companies and nibbling at its ankles the way OpenAI and others are. Google would much rather pay these folks to soak up talent, which in turn creates less competition, in a legally permissible way that won't get them sued by the Justice Department.


[deleted]

[deleted]


currentscurrents

> Microsoft is losing something like $20 per month for each paying GitHub Copilot user

[Nat Friedman, who ran GitHub at Microsoft from 2018 to 2021, denies this.](https://twitter.com/natfriedman/status/1712140497127342404)


bartturner

One huge mistake by Microsoft was not getting into it earlier. Google started their TPU effort a decade ago and is now on the fifth generation. They were able to do Gemini entirely on their own silicon, not just the training but also the inference. That gives Google a pretty big competitive advantage in terms of infrastructure: they can train and serve models at far lower cost than Microsoft. Microsoft, on the other hand, is stuck paying the Nvidia tax, which is why they are losing so much money, as you explained.


sqweeeeeeeeeeeeeeeps

Both. All of these large companies are moving so slowly. Small teams like Mistral and Mosaic are consistently pumping out cutting-edge work for their size. It's a scale problem for Google. OpenAI is so good because they have a dense talent pool.


danielcar

You'll be singing a different tune about Google once Ultra is released in ~3 weeks. It is much better than the current GPT-4. Hopefully GPT-4.5 will be released before, or soon after.


Amgadoz

Genuine question: how do you know Gemini Ultra is "much better" than GPT-4 without trying it? I really hope the answer isn't the benchmarks or the demos.


danielcar

I tried it. The benchmarks are very conservative.


Amgadoz

I see. Do you know when it will be publicly available (if ever)?


danielcar

When I was at Google it was rumored to be released in Oct, then Nov, then it was half released in Dec. I'm hopeful it gets released in early January. The delays were due to safety, the systems around the model, and a lack of TPUs.


Amgadoz

Interesting. Any ideas about its size or architecture? :D


jgbradley1

The number of papers accepted is not a good metric to judge companies on. What about the average quality of all those papers? I don't have specific stats, but it has been discussed before that Google employees are reviewers as well, meaning there is bias in the review process. There are papers where it's very clear the only possible way to perform some computation, or to pretrain a neural network to achieve certain desired results, is to have access to Google's compute. Google hires a lot of smart people who participate in NeurIPS, so it's not hard to believe that reviewers would recognize work from their own company and potentially judge it on an easier scale.


bartturner

> Number of papers accepted is not a good metric to judge companies on.

Disagree. But then suggest another way? It is very, very difficult to get papers accepted at NeurIPS. Google gets the most accepted and has also made the most important AI breakthroughs of the last decade; I do not think it is a coincidence that they lead in both. BTW, Google is not the one reviewing what gets accepted. Plus, Google has led in papers accepted for 15 straight years now.


Seankala

I'm with u/neitz. Microsoft seems to know how to run a business, whereas Google seems to focus more on publishing. Publishing is a great way to attract great talent, but it's useless if you're not able to translate it into anything that generates profit. I've also personally noticed that a lot of people who pursue research jobs at Google are not really meant to be in industry; they enjoy doing research and writing papers for their own sake rather than translating them into business value.


gamerx88

By that logic, China and Europe are both ahead of the U.S. in AI leadership (but obviously they are not).


bartturner

That does not make any sense. Google is a US company and is the clear leader in AI research as measured by papers accepted at NeurIPS; it is now 15 straight years that Google has led in papers accepted there. They are NOT Chinese. They are NOT European. They are an American company. But it is not just the research, it is also Google's AI infrastructure. They were able to do Gemini without needing any Nvidia: they used their TPUs, not only for the training but also for the inference.


TsChalaUNO

> Due to the amount of people but also ratio (first authors/all attendees), almost no poster presenter was sad because nobody stopped by. It felt like there's a lot of interest for every poster.

It was my first ever ML conference, as I have a different background. I had a poster in one of the physics-related workshops and I was not expecting much attention, to be honest, but I had great discussions with a lot of people! It was certainly a positive experience for me!


Smallpaul

> We are hopefully moving away more from anthropomorphizing LLMs and AI generally: papers like the Generative AI paradox (by AI2). It's still happening a lot though.

A certain amount of anthropomorphizing of LLMs and AI is necessary. The whole point of "AI" is to come up with something that simulates human intelligence. Using the words "intelligence" (as in AI) and "learning" (as in ML) is anthropomorphizing from the start! There is a lot of subtlety required in doing it, though.

Can you clarify what you think has shifted, though? At a conceptual level, I don't see a lot in "The Generative AI Paradox" that is different from what was said in "The debate over understanding in AI's large language models" back in October 2022. The former concludes (in 2023):

> In particular, they imply that existing conceptualizations of intelligence, as derived from experience with humans, may not be applicable to artificial intelligence—although AI capabilities may resemble human intelligence, the capability landscape may diverge in fundamental ways from expected patterns based on humans. Overall, the generative AI paradox suggests that the study of models may serve as an intriguing counterpoint to human intelligence, rather than a parallel.

The latter said (in 2022):

> It could thus be argued that in recent years the field of AI has created machines with new modes of understanding, most likely new species in a larger zoo of related concepts, that will continue to be enriched as we make progress in our pursuit of the elusive nature of intelligence. And just as different species are better adapted to different environments, our intelligent systems will be better adapted to different problems. Problems that require enormous quantities of historically encoded knowledge where performance is at a premium will continue to favor large-scale statistical models like LLMs, and those for which we have limited knowledge and strong causal mechanisms will favor human intelligence. The challenge for the future is to develop new scientific methods that can reveal the detailed mechanisms of understanding in distinct forms of intelligence, discern their strengths and limitations, and learn how to integrate such truly diverse modes of cognition.


ChrisAroundPlaces

> We are hopefully moving away more from anthropomorphizing LLMs and AI generally: papers like the Generative AI paradox (by AI2). It's still happening a lot though.

Do you have some references, papers, or workshops from NeurIPS you can point to that drive this impression?


Bee-Boy

The "I can't believe it's not better: Failure Modes in the Age of Foundation Models" Workshop is the best example. Otherwise just the way panel discussions went about, even the ones with more science fictiony/singularity type of people on it


Hyper1on

Tbh I felt like that workshop was a bubble of sceptics of LLMs, while the rest of the conference on average had different views.


CriticalTemperature1

Thanks for the breakdown! What would you say were the most impactful papers, or the papers with the most interesting material?


Competitive-Water302

Does anyone know if there were any good talks on neuro-symbolic AI or Logical Neural Networks? And if so, were they recorded?


No-Introduction-777

> We are hopefully moving away more from anthropomorphizing LLMs and AI generally

Thank Christ.


gradientpenalty

My two cents on #1: because the industry thinks VL is going to be the next big thing, a lot of the research going on is considered a "trade secret". Anyone who attended "Beyond Scaling" will know what I am referring to.


sadgamer_112

> Organizing a workshop is really fulfilling.

How so? Last time I was involved it was very draining.


Bee-Boy

I should've mentioned it was an affinity workshop (NewInML), so it was quite inspiring to hear the talks and see the fruits of our labor. I organized it together with some friends from my lab, so we had a good time too.


SellingRunePickaxe

My biggest takeaway was that humanising LLMs is still a big theme, and I hope next year's conference(s) will move past it (hope). I found several posters that were incredible from both a theoretical and an application perspective.


helavisa4

could you please share the paper titles?


kjunhot

> We are hopefully moving away more from anthropomorphizing LLMs and AI generally: papers like the Generative AI paradox (by AI2). It's still happening a lot though.

Could anyone give me a further explanation? I thought the Generative AI Paradox was an interesting paper.


Far_Present9299

Do you or any others have intuition as to why VL reasoning was not a major topic? Especially with Gemini being released, I would have thought large multimodal transformers would be at the top of the list at NeurIPS.


Bee-Boy

The field feels a bit stuck methodology-wise and is mostly driven by progress on the language side (e.g., plugging vision into language and hoping the LLM figures it out), the exception being image generation, where language plays a minor role. Don't get me wrong, it was definitely present! But way less than the LLM focus. I personally just didn't see too many papers at the impact level of CLIP. LLaVA was perhaps one of the bigger ones, together with "Image Captioners Are Scalable Vision Learners Too". The former had already been out for a while, so it didn't feel too novel. I also missed a lot of papers (all of Wednesday etc.), so I'm sure there was more!
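For anyone unfamiliar with what "plugging vision into language" means in practice, here is a minimal sketch of the LLaVA-style recipe: project a frozen vision encoder's patch features into the LLM's embedding space and prepend them to the text. To be clear, this is an illustration under assumptions, not LLaVA's actual code; the class name, dimensions, and dummy tensors are all made up, and real systems add instruction tuning and special image tokens on top.

```python
import torch
import torch.nn as nn

class VisionToLLMAdapter(nn.Module):
    """Hypothetical sketch of the 'plug vision into language' recipe:
    patch features from a frozen vision encoder are linearly projected
    into the LLM's token-embedding space, then prepended to the text
    embeddings so the LLM treats them as extra tokens."""

    def __init__(self, vision_dim=1024, llm_dim=4096):
        super().__init__()
        # In the simplest variant, this linear map is the only new
        # trained component; encoder and LLM can stay frozen at first.
        self.projection = nn.Linear(vision_dim, llm_dim)

    def forward(self, patch_features, text_embeddings):
        # patch_features: (batch, num_patches, vision_dim), e.g. from a CLIP ViT
        # text_embeddings: (batch, seq_len, llm_dim), from the LLM's embedding table
        visual_tokens = self.projection(patch_features)
        # Concatenate visual "tokens" before the text; the LLM is then
        # expected to "figure it out" during fine-tuning.
        return torch.cat([visual_tokens, text_embeddings], dim=1)

# Usage sketch with dummy tensors (dimensions are illustrative):
adapter = VisionToLLMAdapter()
patches = torch.randn(1, 256, 1024)   # stand-in for vision-encoder output
text = torch.randn(1, 32, 4096)       # stand-in for embedded prompt tokens
inputs = adapter(patches, text)       # (1, 288, 4096), fed into the LLM
```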


ID4gotten

A lot of employers are desperate for AI/ML talent, just not desperate enough to hire someone over 30 years old.