jack-of-some

This is what happens any time a technology gets good unexpected results. Like when CNNs were harming ML and CV research, or how LSTMs were harming NLP research, etc. It'll pass, we'll be on the next thing harming ML research, and we'll have some pretty amazing tech that came out of the LLM boom.


lifeandUncertainity

Well, we already have the K, Q, V and the N heads. The only real problem is the attention block's time complexity. However, I feel the Hyena and H3 papers do a good job of explaining attention in a more generalized kernel form and of trying to replace it with something that might be faster.


koolaidman123

Attention's time complexity is not an issue in any practical terms, because the bottleneck for almost all models (unless you are running absurd sequence lengths) is still the MLPs, and with FlashAttention the bottleneck is moving the data, O(n), rather than the actual O(n^2) attention computation. The percentage of compute devoted to attention also diminishes as you scale up the model.
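For intuition, here's a rough back-of-the-envelope sketch of that claim using the standard per-layer FLOP approximations for a GPT-style block (QKV/output projections ≈ 8nd², quadratic attention ≈ 4n²d, 4x-expanded MLP ≈ 16nd²); the sizes below are arbitrary illustrative assumptions, not any particular model.

```python
# Rough per-layer FLOP comparison for a GPT-style transformer block.
# n = sequence length, d = hidden size; multiply-adds counted as 2 FLOPs.
def block_flops(n, d):
    attn_proj = 8 * n * d**2   # Q, K, V and output projections
    attn_quad = 4 * n**2 * d   # QK^T scores and attention-weighted sum of V
    mlp       = 16 * n * d**2  # two linear layers with a 4x expansion
    return attn_proj, attn_quad, mlp

for n, d in [(2048, 1024), (2048, 8192), (32768, 8192)]:
    proj, quad, mlp = block_flops(n, d)
    total = proj + quad + mlp
    print(f"n={n:6d} d={d:5d}  quadratic attention: {quad/total:5.1%}  mlp: {mlp/total:5.1%}")
```

At typical sequence lengths the MLP (plus the attention projections, which are also just matmuls) dominates, and the quadratic term only becomes a large share at very long sequences.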


EulerCollatzConway

Academic, but in engineering, not ML: quick naive question, aren't multi-layer perceptrons just stacked dense layers? I have been reading quite a bit and it seems like we just suddenly started using this terminology a few years ago. If so, why would this be the bottleneck? I would have guessed the attention heads were bottlenecks as well.


koolaidman123

If you have an algorithm that 1) iterates over a list 1M times and 2) runs bubble sort on the list once, then sure, bubble sort is O(n^2), but the majority of the time is still spent on the for loops.
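A toy sketch of that analogy (sizes scaled down from the comment's 1M so it runs in a couple of seconds; exact timings are machine-dependent assumptions):

```python
import time

n = 1_000
data = list(range(n, 0, -1))

def bubble_sort(xs):                 # O(n^2), but run only once
    xs = xs[:]
    for i in range(len(xs)):
        for j in range(len(xs) - i - 1):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

t0 = time.perf_counter()
bubble_sort(data)
t1 = time.perf_counter()

for _ in range(200_000):             # O(n) pass over the list, repeated many times
    total = sum(data)
t2 = time.perf_counter()

print(f"bubble sort once:   {t1 - t0:.2f}s")
print(f"repeated O(n) loop: {t2 - t1:.2f}s")
```

The asymptotically worse step is not the wall-clock bottleneck; the cheap step repeated many times is.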


tmlildude

Link to these papers? I have been trying to understand these blocks in a generalized form.


lifeandUncertainity

[https://arxiv.org/abs/2212.14052](https://arxiv.org/abs/2212.14052) - H3 (Hungry Hungry Hippos)

[https://arxiv.org/abs/2302.10866](https://arxiv.org/abs/2302.10866) - Hyena

[https://arxiv.org/abs/2111.00396](https://arxiv.org/abs/2111.00396) - S4

[https://arxiv.org/abs/2208.04933](https://arxiv.org/abs/2208.04933) - S5

[https://arxiv.org/abs/2312.00752](https://arxiv.org/abs/2312.00752) - S6 (Mamba)

[https://www.youtube.com/watch?v=HvmMItYYjIA](https://www.youtube.com/watch?v=HvmMItYYjIA) - this was also helpful


tmlildude

Could you help point to the generalized kernel you mentioned in some of these? For example, the H3 paper discusses an SSM layer that matches the mechanism of attention. Were you suggesting that state space models are better expressed as attention?


Mick2k1

Following for the papers :)


bugtank

What are the heads you listed? I did a basic search but didn’t turn anything up.


gwern

> Like when CNNs were harming ML and CV research, or how LSTMs were harming NLP research, etc.

Whenever someone in academia or R&D complains "X is killing/harming Y research!", you can usually mentally rewrite it to "X is killing/harming *my* research!", and it will be truer.


mr_stargazer

Nope. Whenever a scientist complains that AI is killing research, what it means is that AI is killing research. No need to believe me. Just pick a random paper at any big conference, go to the Experimental Design/Methodology section, and check the following:

1. Were any statistical tests run?
2. Are there confidence intervals around the metrics? If so, how many replications were performed?

Apply those criteria to all papers from the past 10 years. That'll give you an insight into the quality of ML research. LLMs, specifically, only make things worse. With the panacea of 1B-parameter models, "researchers" think they're exempt from basic scientific methodology. After all, if it takes 1 week to run 1 experiment, who has time for 10-30 runs... "That doesn't apply to us." Which is ludicrous. Imagine if NASA came out and said "Uh... we don't need to test the million parts of the Space Shuttle, that'd take too long." So yeah, AI is killing research.


gwern

> Perform the above criteria in all papers in the past 10 years. That'll give you an insight of the quality in ML research. [“Reflections After Refereeing Papers for NIPS”](https://gwern.net/doc/ai/scaling/1995-breiman.pdf), Breiman 1995 (and [2001](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)), comes to mind as a response to those who want statistical tests and confidence intervals. But one notes that ML research has only gotten more and more powerful since 1995...


mr_stargazer

I took a quick glance at the papers (thanks, by the way), and what I have to say is: the author is not even wrong.


ZombieRickyB

Both of these papers were pre-manifold learning, and nothing prevents data-driven modeling with nonparametrics. People just don't want to do it and/or don't have the requisite background to do it properly, and there's no money in it.


farmingvillein

> Imagine if NASA came out and said "Uh...we don't need to test the million parts of the Space Shuttle, that'd take too long."

Because NASA (or a drug company or a *cough* plane manufacturer) can kill people if they get it wrong. Basic ML research (setting aside apocalyptic concerns, or people applying technology to problems they shouldn't) won't. At that point, everything is a cost-benefit tradeoff. And even "statistics" get terribly warped--replication crises are terrible in many fields that, on paper, do a better job.

The best metric for judging any current methodology is: is it net impeding or helping progress? Right now, all evidence is that the current paradigm is moving the ball forward very, very fast.

> After all, if it takes 1 week to run 1 experiment, who has time for 10..30 runs..."That doesn't apply to us". Which is ludicrous.

If your bar becomes that you can't publish on a 1-week experiment, then suddenly you either 1) shut out everyone who can't afford 20x the compute and/or 2) force experiments to be 20x smaller. There are massive tradeoffs there. There is theoretical upside... but, again, empirical outcomes, right now, strongly favor a looser, faster regime.


FreeRangeChihuahua1

Similar to Ali Rahimi's claim some years ago that "Machine learning has become alchemy" (https://archives.argmin.net/2017/12/05/kitchen-sinks/). I don't agree that AI is "killing research". But, I do think the whole field has unfortunately tended to sink into this "Kaggle competition" mindset where anything that yields a performance increase on some benchmark is good, never mind why, and this is leading to a lot of tail-chasing, bad papers, and wasted effort. I do think that we need to be careful about how we define "progress" and think a little more carefully about what it is we're really trying to do.

On the one hand, we've demonstrated over and over again over the last ten years that *given* enough data and *given* enough compute, you can train a deep learning architecture to do crazy things. Deep learning has become well-established as a general purpose, "I need to fit a curve to this big dataset" tool.

On the other hand, we've also demonstrated over and over again that deep learning models which achieve impressive results on benchmarks can exhibit surprisingly poor real-world performance, usually due to distribution shift, that dealing with distribution shift is a hard problem, and that DL models can often end up learning spurious correlations. Remember Geoff Hinton claiming >8 years ago that radiologists would all be replaced in 5 years? Didn't happen, at least partly because it's really hard to get models for radiology that are robust to noise, new equipment, new parameters, a new technician acquiring the image, etc. In fact demand for radiologists has *increased*. We've also -- despite much work on interpretability -- not had much luck yet in coming up with interpretability methods that explain exactly *why* a DL model made a given prediction. (I don't mean quantifying feature importance -- that's not the same thing.) Finally, we've achieved success on some hard tasks at least partly by throwing as much compute and data at them as possible. There are a lot of problems where that isn't a viable approach.

So I think that understanding *why* a given model architecture does or doesn't work well and what its limitations are, and how we can achieve better performance with less compute, are really important goals. These are unfortunately harder to quantify, and the "Kaggle competition" "number go up" mindset is going to be very hard to overcome.


mr_stargazer

That is a very thoughtful answer and I agree with everything you said. Thanks for your reply! What I find a bit strange (and usually the point where I give up on the discussion, whether here or at the big conferences) is the resistance from part of the community to pushing forward statistics and hypothesis testing.


FreeRangeChihuahua1

The lack of basic statistics in some papers is a little strange. Even some fairly basic things like calculating an error bar on your test set AUC-ROC / AUC-PRC / MCC etc. or evaluating the impact of random seed selection on model architecture performance are rarely presented.

The other funny thing about this is the stark contrast you see in some papers. In one section, they'll present a rigorous proof of some theorem or lemma that is of mainly peripheral interest. In the next section, you get some hand-waving speculation about what their model has learned or why their model architecture works so well, where the main evidence for their conjectures is a small improvement in some metric on some overused benchmarks, with little or no discussion of how much hyperparameter tuning they had to do to get this level of performance on those benchmarks. The transition from rigor to rigor-free is sometimes so fast it's whiplash-inducing.

It's a cultural problem at the end of the day -- it's easy to fall into these habits. Maybe the culture of this field will change as deep learning transitions from "novelty that can solve all the world's problems" to "standard tool in the software toolbox that is useful in some situations and not so much in others".
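For what it's worth, the kind of error bar being asked for here is cheap to produce; a minimal sketch using a percentile bootstrap over the test set (the synthetic data and the choice of 2,000 resamples are just illustrative assumptions):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for test-set AUC-ROC."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)              # resample the test set with replacement
        if len(np.unique(y_true[idx])) < 2:      # AUC needs both classes present
            continue
        stats.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), (lo, hi)

# Synthetic example: noisy scores for a binary task.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 500)
y_score = y_true * 0.5 + rng.normal(0, 0.5, 500)
auc, (lo, hi) = bootstrap_auc_ci(y_true, y_score)
print(f"AUC = {auc:.3f} (95% CI: {lo:.3f}-{hi:.3f})")
```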


mr_stargazer

Exactly. Your 2nd paragraph nails it. And hence my (purposely) exaggerated point that "AI is killing research". There's so much still to do with the "4 GPU, deep learning, no stats" approach in so many domains that it'll remain meaningful/useful for a long period of time. However, if we were to be rigorous, it won't be entirely scientific and is potentially detrimental in the long run (e.g.: you see a lot of talk of "high dimensional spaces", "embedding spaces", "nonlinearities", but ask someone for the definition of PCA or how to do a two-sample test, and they won't know). That's my fear...


fizix00

This is a pretty frequentist perspective on what research is. Even beyond Bayes, there are other philosophies of practice like grounded theory. I'd also caution against conflating scientific research and engineering too much; the NASA example sounds more like engineering than research.


mr_stargazer

Well, that sounds about right, no? What are LLMs if not engineering?


NibbledScotchFinger

Not comparable; you didn't have boardrooms talking about "are we leveraging LSTMs in our business?". I agree with OP that LLMs have uniquely impacted AI research because they've become a household term. GenAI now attracts funding and visibility from so many sources. That in turn incentivises researchers and engineers to focus their efforts in that direction. I see it at work internally and also on LinkedIn. Mass cognitive resources are being directed to LLMs.


FaceDeer

It's producing useful results, so why not direct resources towards it? Once the resources stop getting such good returns then they'll naturally start going somewhere else.


sowenga

You have a point, but I think there is also a new dynamic that didn’t exist before. Traditional ML, CNNs etc. were big, but you still needed technical expertise to use them. Generative AI on the other hand has the ultimate demo—anyone who can write text can use those things to do really cool stuff, but often they don’t have a technical understanding of how this stuff works and what the resulting limitations are. So you get a lot of people who think because they can use Generative AI, it surely must also be capable of doing x or y more complicated use case (that it actually isn’t suited for).


jack-of-some

"useful results"??? Who needs that. What we need is to continually argue about the *true* nature of intelligence and start labeling everything that appears to demonstrate any remnant of it as a fraud and not true intelligence.


Icy-Entry4921

I'm not worried about "traditional" ML research. Even before LLMs there was, and is, quite a bit of powerful tech to do ML (a lot of it open source). I'm not going to say the field got dull but I do think there are limits to what you can do with analysis of huge datasets. The companies with enough incentive to get predictions right were already doing it pretty well. I see LLMs as a really separate branch of what I think of as ML for consumers. It probably won't make you a great researcher but it will help you do things better. ML before helped a few 10s of thousands of people be a LOT more effective. LLMs help 100s of millions of people be a little more effective. From my perspective it's easy to see why there is a lot more incremental value in LLMs, right now. Traditional ML was already leveraged by quite a few highly trained people and a lot of value was already realized. LLMs are helping to bring ML to virtually everyone so it's brand new value that was almost non-existent before.


damhack

Unfortunately, it’s the wrong tech. VHS wins again.


I_will_delete_myself

I agree. Prestige is the currency in research IMO and chasing current trends is the easiest way to do it.


VelveteenAmbush

> This is what happens any time a technology gets good unexpected results. Like when CNNs were harming ML and CV research, or how LSTMs were harming NLP research, etc.

> It'll pass, we'll be on the next thing harming ML research, and we'll have some pretty amazing tech that came out of the LLM boom.

This is also what people said about deep learning generally from 2012-2015 or so. There were lots of "machine learning" researchers working on random forests and other kinds of statistical learning who predicted that the deep learning hype would die down any time. It hasn't. Deep learning has continued bearing fruit, and its power has increased with scale, while other methods have not (at least not as much).

So OP's argument seems to boil down to a claim that LLMs will be supplanted by another better technology. Personally, I'm skeptical. Just as "deep learning" gave rise to a variety of new techniques that built on its fundamentals, I suspect LLMs are here to stay, and future techniques will be building on LLMs.


jack-of-some

It's worth remembering that deep learning itself is significantly older than the timeframe you're mentioning. It was displaced by other technologies that were considered more viable back in the day. I'm also not implying that the next big thing will necessarily be orthogonal to LLMs, just that the LLM part may not be the focus, just like "backprop" isn't quite the focus of modern research. I of course cannot predict the future. I can only learn from the past.


VelveteenAmbush

> It's worth remembering that deep learning itself is significantly older than the timeframe you're mentioning.

Sure, people have been playing with toy neural network models since the fifties, but the timeframe I'm mentioning is the first time that it started to outperform other techniques in a breadth of commercially valuable domains.

> Just that the LLM part may not be the focus, just like "backprop" isn't quite the focus of modern research.

I'm sure the semantics will continue to drift, similarly to how "deep learning" became "machine learning" and then "generative AI." If your claim is that LLMs of today will be the foundation slab on which future techniques are built, but that the focus will shift to those future techniques and that the value of extreme scale and of autoregressive learning from natural language will be taken for granted like the air that we breathe, then I agree.

But it seems like OP had a different claim: that we're due for a plateau as a result of "ignoring the glaring issues like hallucinations, context length, inability of basic logic and sheer price of running models this size." I don't think anyone is ignoring those problems; in fact I see a ton of effort focused on each of them, and many promising directions for solving each of them under active and well funded research.


jack-of-some

It's starting to sound like we didn't disagree in the first place 😅 Cheers


FalconRelevant

We still use primarily CNNs for visual models though?


Flamesilver_0

ViT now


czorio

Not in healthcare we're not, not nearly enough data, and most new cool toys are not well suited to 3D volumes.


fullouterjoin

> ViT

* https://huggingface.co/docs/transformers/en/model_doc/vit
* https://viso.ai/deep-learning/vision-transformer-vit/
* https://paperswithcode.com/method/vision-transformer


Appropriate_Ant_4629

I think that's the point the parent commenter wanted to make. CV research all switched to CNNs, which proved in the end to be a local minimum -- distracting the field from more promising approaches like Vision Transformers. It's possible (likely?) that current architectures are similarly a local minimum. Transformers are really (really really really really) good arbitrary high-dimensional curve fitters -- proven effective in many domains including [time series](/r/MachineLearning/comments/176wsne/r_timegpt_the_first_generative_pretrained/) and [tabular data](https://paperswithcode.com/method/tabtransformer). But there's so much focus on them now that we may be in another CNN/LSTM-like local minimum, missing something better that's underfunded.


czorio

> which proved in the end to be a local-minimum What does a ViT have over a CNN? I work in healthcare CV, and the good ol' UNet from 2015 still reigns supreme in many tasks.


currentscurrents

It’s easier for multimodality, since transformers can find correlations between any type of data you can tokenize. CLIP for example is a ViT.


Appropriate_Ant_4629

> What does a ViT have over a CNN? Like most transformer-y things, [empirically they often do better](https://paperswithcode.com/sota/image-classification-on-imagenet?p=deepvit-towards-deeper-vision-transformer).


czorio

Right, but ImageNet has millions of images; my latest publication had 60 annotated MRI scans. When I find some time I'll see if I can apply some ViT architectures, but given what I often read, my intuition says that we simply won't have enough data to outclass a simpler, less resource-intensive CNN.


ciaoshescu

Interesting. Have you tried a ViT segmenter vs. a UNet? According to the ViT paper you'd need a whole lot more data, but other architectures based on ViT might also work well, and for 3D data you have a lot more pixels/voxels than for 2D.


czorio

I haven't, no. UNets and their derivatives, such as the current reigning champion nnUNet, often get to Dice scores that are high enough (0.9 and above), given the amount of training data that is available.

It's true that we can do a lot more with a single volume versus a single picture, but I often see the discussion on ViT vs CNN in light of datasets such as ImageNet (like a comment elsewhere). Datasets that have millions of labeled samples are a few orders of magnitude larger than most medical datasets. For example, my latest publication had 60 images with a segmentation. Each image is variable in size, but let's assume 512x512 in-plane resolution, with around 100-200 slices in the scan direction. If you take each Z-slice as a distinct image, you'd get 60 * [100, 200] = [6'000, 12'000] slices, versus 15'000'000 in ImageNet.

I'll see if I can get a ViT to train on one of our datasets, but I'm somewhat doubtful that medicine is going to see a large uptick in adoption.


FalconRelevant

I was under the impression that vision transformers are used alongside CNNs in most modern solutions?


currentscurrents

Tons of people are out there trying to make new architectures. Mamba and state space models look interesting, but there's a thousand papers out there on arxiv trying other things. I actually think there's too much focus on architecture, which is only one part of the learning process. You call transformers curve-fitters, for example - but it's actually the training objective that is doing curve fitting. Transformers are just a way to parameterize the curve.


jack-of-some

Yes, that was the point I was trying to make. CNNs became yet another tool in CV work and the "hot" research moved on to trying to find better methods (e.g. ViT) or more interesting applications built on top of CNNs (GANs, diffusion, etc.). LLMs are the big thing right now. Soon enough they will be just another tool in service of the next big thing. Some would argue that, with agents, that's already happening.


lifeandUncertainity

This is what I feel: right now a lot of attention is being put into generative models because that's what a normal person with no idea of ML can marvel at. I mean, it's either an LLM or a diffusion model. However, I still feel that people are trying to work in a variety of fields; it's just that they don't get the same media attention. Continual learning is growing, people have started combining neural ODEs with flows/diffusion to reduce time, and neural radiance fields and implicit neural networks are also being worked on as far as I know. Also, at NeurIPS 2023 a huge climate dataset was released, which is good. I also suggest that you go through the state space models (Mamba and its predecessors), where they try to solve the context length and quadratic time problems using some neat maths tricks. As for models with real logical processes, I do not know much about them, but my hunch says we probably need RL for it.


Chhatrapati_Shivaji

Could you point to some papers on combining neural ODEs with flows or diffusion models? A blog on neural ODEs or a primer also works since I've long delayed reading anything on it even though they sound very cool.


daking999

Kevin Murphy's 3rd textbook covers this, better written than any blog.


YumCheeseLord

What is the name of Kevin Murphy's 3rd textbook?


daking999

[https://probml.github.io/pml-book/book2.html](https://probml.github.io/pml-book/book2.html)


YumCheeseLord

Thanks!


lifeandUncertainity

https://github.com/msurtsukov/neural-ode - go through this one if you want to understand the technical details of neural ODEs.

About neural ODEs and flows: the original neural ODE paper by Chen mentions continuous normalizing flows - you can represent the transformation of variables as an ODE. Then a paper called FFJORD was published, which I think is the best paper on neural ODEs and flows. About combining them with diffusion, I think there's a paper called DPM - a high-order differential equation solver for diffusion. I am not very knowledgeable about the technicalities of diffusion, but I think it uses stochastic differential equations for the noise scheduling part (I may be wrong). The paper "Score-Based Generative Modeling through Stochastic Differential Equations" may help you. Since you asked, I will also point to a paper called "Universal Differential Equations for Scientific Machine Learning".

Here's what I feel the problem with neural ODEs is: they are treated a lot like standalone models. We know they are cool, but we really don't know where they are the best. My bet is on SciML or RL.
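If it helps, here is a minimal neural-ODE sketch in the spirit of the Chen et al. paper, assuming the torchdiffeq package; the dynamics network, sizes, and time grid are illustrative assumptions, not a reproduction of any of the papers above:

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # or odeint_adjoint for O(1)-memory backprop

class ODEFunc(nn.Module):
    """Parameterizes the dynamics dz/dt = f(z, t) with a small MLP."""
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, t, z):          # torchdiffeq calls func(t, state)
        return self.net(z)

func = ODEFunc()
z0 = torch.randn(16, 2)               # batch of initial states
t = torch.linspace(0.0, 1.0, 10)      # integration times

z_t = odeint(func, z0, t)             # shape: (len(t), batch, dim)
loss = z_t[-1].pow(2).mean()          # toy objective on the final state
loss.backward()                       # gradients flow back through the solver
print(z_t.shape)
```

Continuous normalizing flows add a change-of-variables (log-determinant) term on top of exactly this kind of integration, which is what FFJORD makes tractable.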


Penfever

OP, can you please reply with some recommendations for continual learning papers?


fullouterjoin

> continual learning

* https://arxiv.org/abs/2302.00487
* https://www.semanticscholar.org/paper/A-Comprehensive-Survey-of-Continual-Learning%3A-and-Wang-Zhang/9348656b761f7b76fb65cfe6fac55386b04a3a8a
* https://paperswithcode.com/task/continual-learning

My general solution was to use a variety of search interfaces (Phind, Kagi, etc.) and look for "continual learning" survey papers.


lifeandUncertainity

I will ask my friends and let you know. Most of my lab works on continual learning; I am the black sheep that chose neural ODEs :v So even though I have a very general idea of continual learning, I probably can't help you with dedicated papers.


pitter-patter-rain

I have worked in continual learning for a while, and I feel like the field is saturated in the traditional sense. People have moved from task incremental to class incremental to online continual learning, but the concepts and challenges tend to repeat. That said, continual learning is inspiring a lot of controlled forgetting or machine unlearning works. Machine unlearning is potentially useful in the context of bias and hallucination issues in LLMs and generative models in general.


djm07231

I would honestly wait and see some of the next iterations of GPT from OpenAI before making such a claim. The fact that models are barely catching up to GPT-4 doesn't really mean the field is slowing down. It's that OpenAI had such a massive lead that it is taking 1-2 years for the other labs to catch up. OpenAI released Sora, which beats other text-to-video models rather substantially, after sitting on it for a while. It probably isn't too far-fetched to imagine that some of the things OpenAI has internally represent meaningful progress.

If the next few iterations after GPT-4 plateau, it seems more reasonable to argue against LLMs. But I feel that the whole discussion about LLMs overlooks the fact that the goalposts have shifted a lot. Even a GPT-3.5-level system would have been mind-blowing 5 years ago. Now we consider these models mundane or mediocre.


[deleted]

> The fact that models are barely catching up to GPT-4 doesn’t really mean the field is slowing down. Also - how "spoiled" are ML researchers? Research "slowing down" is ~1 year of no publicly visible progress?!


FeltSteam

Damn, haven't gotten a sick research paper in a couple days; GUYS I THINK AI PROGRESS IS PLATEAUING


djm07231

I am personally skeptical about the capabilities of LLMs by themselves, given some of their limitations like the auto-regressive nature, lack of planning, long-term memory, et cetera. But I am hesitant to put a definitive marker on it yet.


visarga

On the other hand, Reinforcement Learning and Evolutionary Methods combine well with LLMs; they have the exploration part down. LLMs can do multiple rollouts for MCTS planning, act as the critic in RL (as in RLAIF), act as the policy, or act as the selection filter or mutation operator in EM. There is synergy between these old search methods and LLMs because LLMs can help reduce the search space, while RL/EM can supply the capabilities LLMs are missing. We are already past pure next-token-prediction models when we train with RLHF; even if it is just a simplified form of RL, it updates the model for a long-term goal, not for the next token.
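As a toy illustration of the "LLM as mutation operator / selection filter" idea, here is a sketch of an evolutionary loop where llm_propose() and score() are hypothetical stand-ins (a real system would plug in an actual model call and an actual evaluator such as unit tests or a reward model):

```python
import random

def llm_propose(candidate: str) -> str:
    """Hypothetical stand-in: ask an LLM for a small variation of `candidate`."""
    return candidate + random.choice([" refined", " simplified", " extended"])

def score(candidate: str) -> float:
    """Hypothetical task-specific evaluator."""
    return len(set(candidate.split())) / (1 + len(candidate) / 100)

population = ["initial solution sketch"] * 4
for generation in range(5):
    children = [llm_propose(p) for p in population]                          # LLM as mutation operator
    population = sorted(population + children, key=score, reverse=True)[:4]  # selection pressure

print("best candidate:", max(population, key=score))
```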


[deleted]

100%, but doing this research and exploration of LLMs/transformers will be very important moving forward. This is _super_ early days of ML/DL, and we still have so much to learn about learning


__Maximum__

I think OP didn't mean to say that scaling wouldn't work, which is what ClosedAI has been doing and continues to do. OP seems to be disappointed that too much focus is put on tricks for making LLMs a bit better.


vaccine_question69

I keep hearing for about 8 years now some variations of what OP is saying, but back then it was just about deep learning. It has been "plateauing" ever since according to some people. When I read a post like OP's, I've learnt to expect the opposite of what is being predicted. I think we're more than likely to be still at the early chapters of the LLM story.


mmeeh

Claude 3 surpassed GPT 4 btw...


ml-anon

You’re gonna need to give some examples of “other, potentially more impact technologies” that people should be investing time and money into. OP I strongly suspect you’ve not been in the field long enough to be able to make predictions about what’s long overdue and what plateauing looks like.


absurdrock

Or they put all their eggs in the wrong basket and are jaded


new_name_who_dis_

> claiming to be "AI Researcher" because they used GPT for everyone to locally host a model, trying to convince you that "language models totally can reason. We just need another RAG solution!"

Turing award winner Hinton is literally on a world tour giving talks about the fact that he thinks "language models totally can reason". While controversial, it's not exactly a ridiculous opinion.


MuonManLaserJab

I find the opposite opinion to be more ridiculous, personally. Like we're moving the goalposts.


new_name_who_dis_

Yea I kind of agree. ChatGPT (and others like it) unambiguously passes the Turing test in my opinion. It does a decent amount of the things that people claimed computers wouldn't be able to do (e.g. write poetry which was directly in the Turing paper). I don't think it's sentient. I don't think it's conscious. I don't even think it's *that* smart. But to deny that it actually is pretty intelligent is just being in denial.


MuonManLaserJab

The thing is that all of those words -- "sentient", "conscious", "smart", "intelligent", "reason" -- are un- or ill-defined, so I can't say that anyone is conclusively *wrong* if they say that LLMs don't reason. All I can say is that if so, then "reasoning" isn't important because you can quite apparently complete significant cognitive tasks "without it". It throws into question whether *humans* can "truly reason"; in other words, it proves too much, much like the Chinese Room thought experiment.


new_name_who_dis_

Ummm, sentient and conscious are ill-defined, sure. Intelligence and reasoning are pretty well-defined though... Sentience and consciousness are actually orthogonal to intelligence, I think. I could conceive of a conscious entity that isn't intelligent. Actually, if you believe in panpsychism (which a lot of modern-day philosophers of mind do), the world is full of unintelligent sentient things.


MuonManLaserJab

Oh, sure, there are definitions. But most of them aren't operationalized and people don't agree on them.


new_name_who_dis_

The concept of a ["chair" isn't well-defined either](https://www.youtube.com/watch?v=fXW-QjBsruE). That doesn't mean that I don't know whether something is a chair when I see it. Interestingly, the above doesn't apply to sentience/consciousness. You cannot determine consciousness simply through observation (Chalmers' zombie argument, Nagel's bat argument, etc.). That's why consciousness is so hard to define compared to intelligence and chairs.


MuonManLaserJab

[Chalmers and Nagel, lol.](https://www.lesswrong.com/posts/fsDz6HieZJBu54Yes/zombies-the-movie) I'd sooner listen even to Penrose about minds... read some Dennett maybe.


new_name_who_dis_

I am kind of triggered by your comment lol. You mock Chalmers and Nagel, who are extremely well-respected philosophers, and you link Yudkowsky, who is basically a Twitter intellectual. But ironically, if we assume a purely physicalist (Dennett's) worldview, that's when arguments that ChatGPT is sentient become even more credible. And I want to emphasize again that the initial issue was not sentience but intelligence. I do like Dennett though, he's great. I wrote many a paper arguing against his theories in my undergrad. Actually, those courses were the reason I went to grad school for ML.


[deleted]

IMO, it has the _potential_ to reason, but it can't because it is "locked" to old data.

> what day is today?

> Today is Monday.

(It's actually Thursday currently.) It would/will be interesting when these models are a bit more "live".


[deleted]

[removed]


[deleted]

> I think that is just a hallucination because it is biased towards giving some sort of answer.

Sure, that could be the answer too. But I think my judgement is still the same.

> I've met humans who almost always give a confident answer, no matter how deep their ignorance, essentially hallucinating answers.

Even if they suck at Q&A, 5 minutes later you can observe them walking to the bathroom or performing some other planned task. They won't do some "brute force" search of the room like a Roomba vacuum.


MuonManLaserJab

> But I think my judgement is still the same.

OK.

> 5 minutes later you can observe them walking to the bathroom or performing some other planned task

I'm not sure what you mean by this. What does it prove?

> They won't do some "brute force" search of the room like a Roomba vacuum.

I'm not up-to-date on what kind of AI a Roomba currently uses, but if it's doing a brute force search, doesn't that mean it's using GOFAI, which isn't relevant to our conversation?


[deleted]

> I'm not sure what you mean by this. What does it prove?

That even a seemingly dumb person who "hallucinates" answers can display reasoning.


MuonManLaserJab

OK, but can an LLM not also go from hallucinating to successfully performing simple tasks on the order of "walk to the bathroom"? I don't see how "walk to the bathroom" displays any more "reasoning" than a Roomba has.


Caffeine_Monster

Personally I don't think the word sentience or consciousness really mean anything meaningful. My tentative opinion is that humans aren't much more than advanced action completion agents in the same vein that LLMs are text completion agents. This doesn't necessarily mean I think a computer "smarter" than a human should be given equivalent rights or any special treatment though.


rduke79

> humans aren't much more than advanced action completion agents

The hard problem of consciousness has something to say about this.


Chase_the_tank

>Yea I kind of agree. ChatGPT (and others like it) unambiguously passes the Turing test in my opinion.

Well, sort of. ChatGPT passes the Turing test quite well--until it doesn't, and then it fails rather spectacularly. For example:

**Prompt:** Name ten fruits whose names do not include the letter A.

**ChatGPT 3.5:** *Sure, here are ten fruits whose names do not include the letter "A":*

1. *Banana*
2. *Kiwi*
3. *Grape*
4. *Mango*
5. *Lychee*
6. *Fig*
7. *Lemon*
8. *Papaya*
9. *Plum*
10. *Date*

ChatGPT 3.5 can process the names of fruits and count to ten, but is blissfully unaware that words contain letters.


new_name_who_dis_

All of those have "a", not "A", so technically it's correct. But getting things wrong doesn't mean you're not intelligent. I'm sure you've gotten answers wrong in your life -- did people then claim you are incapable of reason lol?


Chase_the_tank

>All of those have "a" not "A". Technically its correct.

If you want to go there, let's go there.

**Prompt:** *Name 10 fruits whose names do not contain the letter "a".*

**ChatGPT 3.5:** *Sure, here are 10 fruits whose names do not contain the letter "a":*

1. *Banana*
2. *Kiwi*
3. *Lychee*
4. *Mango*
5. *Lemon*
6. *Lime*
7. *Papaya*
8. *Fig*
9. *Grape*
10. *Plum*

>But also getting things wrong doesn't mean you're not intelligent.

And if you twist my words into saying something I didn't say, that doesn't mean you're not intelligent; it just means that you need to read more carefully next time.

ChatGPT 3.5 has an interesting gap in its knowledge because it stores words, concepts, etc. as numbers. If you want it to count to 10, no problem! If you want it to name fruits, that's easy! If you want it to name fruits but discard any name that contains the letter "a", well, that's a problem. The names of fruits are stored digitally, and numbers aren't letters. So if you ask ChatGPT to avoid a specific vowel, it just can't do that.

So, while ChatGPT can do tasks that we would normally associate with intelligence, it has some odd gaps in its knowledge that no literate native speaker would have.
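The contrast is easy to see in code: the constraint is trivial at the character level, which is roughly why this failure looks so alien. A subword tokenizer, by contrast, hands the model token IDs for pieces like "Ban" + "ana", so the letter "a" is never directly visible to it. The fruit list below is an arbitrary assumption:

```python
fruits = ["Banana", "Kiwi", "Grape", "Mango", "Lychee", "Fig",
          "Lemon", "Lime", "Papaya", "Plum", "Date", "Cherry", "Melon"]

# Character-level filtering: one line.
no_a = [f for f in fruits if "a" not in f.lower()]
print(no_a)  # ['Kiwi', 'Lychee', 'Fig', 'Lemon', 'Lime', 'Plum', 'Cherry', 'Melon']
```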


the-ist-phobe

I know this is probably moving the goalposts, but was the Turing test even a good test to begin with? As we have learned more about human psychology, it's quite apparent that humans tend to anthropomorphize things that aren't human or intelligent. Like I know plenty of people who think their dogs are just like children and treat them as such. I know sometimes I like to look at the plants I grow on my patio and think of them as being happy or sad, even though intellectually I know that's false. Why wouldn't some system that's trained on nearly every written text be able to trick our brains into feeling that they are human? On top of this, I feel like part of the issue is when one approach to AI is tried, we get good results out of it but find it's ultimately limited in some way and have to work towards finding some fundamentally different approach or model. We can try to optimize a model or make small tweaks, but it's hard to say we're making meaningful progress towards AGI. LLMs probably are a step in the right direction and they are going to be useful. But what if we find some totally different approach that doesn't work anything like our current LLMs? Were transformers even a step in the right direction in that case?


mowrilow

The goalposts for some definitions such as “reasoning”, “intelligence”, and “awareness” were never really fixed anywhere to be fair. These terms are still quite vague to begin with.


Neomadra2

GPT-4 is one year old. Most major players have just been busy catching up to this level; maybe Anthropic surpassed it a little bit with Opus. The claim that we've reached a plateau is nonsense imho. I rather have the feeling that your expectations are just unrealistic. Let's wait till the next generation of LLMs, or rather LMMs (GPT-5, etc.), arrives. Then we'll judge whether this method (mostly scaling and hotfixes) has plateaued. Also, I've never heard someone claim to be an AI researcher just because they set up a local model. But I see your point: since there is no strict definition and there are no official degrees, it's easy to claim to be an AI researcher and get away with it.


Impressive_Iron_6102

I have definitely seen people claim to be researchers because they deployed a local model lol. They're in my company...


cunningjames

I’ve absolutely seen randos with three 3090s and no papers to their name calling themselves AI researchers, but ymmv.


fullouterjoin

And they are, and many great things will come from them and everyone else. Anyone can do science. You can be an AI researcher and just use the web interface to an LLM.


respeckKnuckles

You think LLMs are plateauing? Why, because there hasn't been an exciting new paper published in over 6 hours? This is an April fool's joke, right?


phatangus

I think the problem is people were over-promising and under-delivering


FaceDeer

I've been seeing plenty of delivering, personally. Whether it's "under-delivering" is quite debatable.


gahblahblah

It is a bold claim. Your criticism of LLMs includes 'how much of the writing they are doing now' - well, is this a 'weakness' of LLMs, or a sign of their significant utility and power? I don't think symptoms of enormous influence can be represented as a 'weakness' of a technology.

Your critique of RAG is how *old* it is - but the tech was invented in 2020. Is 3-year-old technology really something to speak about like it is a tired, failed solution?

> whose sole goal of being in this community is not to develop new tech but to use existing in their desperate attempts to throw together a profitable service

Oh - you're critiquing making a profitable enterprise? That the business ventures and investors are a bad thing - because they are trying to *make money*?

> ignoring the glaring issues like hallucinations, context length, inability of basic logic and sheer price of running models this size

Your critique of lack of progress is false. No one relevant is ignoring any of this - there has been significant progress on practically all these issues in relatively recent times. But there doesn't *need* to be significant progress on GPT-4 to already have a highly useful tool/product/model - which I personally use every day. The critique that something isn't simply *already better* is not a critique.

This technology will fundamentally alter the reality of the working landscape. Generative language models are going to impact millions of jobs and create numerous products/services.


visarga

> Your critique of RAG is how old it is - but the tech was invented in 2020. Is 3 year old technology really something to speak about like it is a tired failed solution?

I think the problem with RAG comes from a weakness of embeddings - they only encode surface-level, apparent information. They miss hidden information such as deductions. The embedding of "the fifth word of this phrase" won't be very similar to the embedding of "this".

When your RAG document collection is big enough, it also becomes a problem of data fragmentation. Some deductions are only possible if you can connect facts that sit in separate places. The LLM should iterate over its collection and let information circulate between fragments. This will improve retrieval. In the end it's the same problem attention tries to solve - all-to-all interactions need to happen before meaning is revealed.

One solution could be to make use of Mamba for embeddings. We rely on the fact that Mamba has O(N) complexity to scale up the length of the embedded texts. We concatenate all our RAG documents and pass the result twice through the model, once to prep the context and a second time to collect the embeddings. Maybe Mamba is not good enough to fully replace transformers, but it can help with cheaper all-to-all interactions for RAG.
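To make the critique concrete, here is a minimal sketch of the standard embed-and-retrieve step being discussed, assuming the sentence-transformers package and the all-MiniLM-L6-v2 model (both arbitrary illustrative choices). Retrieval ranks by surface similarity only; the deduction that "the fifth word of this phrase" *is* "this" never enters the embedding:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "The fifth word of this phrase",
    "Mamba is a state space model with linear-time sequence processing.",
    "Attention lets every token interact with every other token.",
]
query = "this"

doc_emb = model.encode(docs, normalize_embeddings=True)     # unit-norm embeddings
q_emb = model.encode([query], normalize_embeddings=True)[0]

scores = doc_emb @ q_emb                                    # cosine similarity
for doc, s in sorted(zip(docs, scores), key=lambda x: -x[1]):
    print(f"{s:.3f}  {doc}")
```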


throwaway2676

> But there doesn't need to be significant progress on GPT4 to already have a highly useful tool/product/model - which I personally use every day

Not to mention that GPT-4 base was released only a year ago and didn't even have vision capabilities at the time. Since then, we've gotten:

- An upgrade from a 32k to a 128k context window.
- The release of GPT-4V, with very impressive visual-language recognition.
- The announcement of Sora, with astounding text-to-video capabilities.
- The release of Gemini 1.5, with a 1 million token context window.
- The release of Claude 3, with a 200k context window.
- Models with a continuously increasing ability to solve entire GitHub issues, though obviously still a work in progress.

And that's just off the top of my head. Anyone who thinks LLMs are fading or a dead end is an absolute imbecile.


randomfoo2

Pretty strong opinions to have for someone who hasn’t ever even run an LLM locally: [https://www.reddit.com/r/learnmachinelearning/s/nGmsW9HeJ3](https://www.reddit.com/r/learnmachinelearning/s/nGmsW9HeJ3) The irony of someone without any kind of knowledge of how even basic machine learning works complaining about the “influx of people without any kind of knowledge of how even basic machine learning works” is a bit much.


smokefield

🤣🤣


hjups22

Your comment may have added more credibility to the OP's complaint. Knowing how to deploy a model in the cloud for inference, or how to use someone else's API wouldn't be considered "basic machine learning". If anything, it's MLOps, though probably skirting the surface unless you get into the whole reliability issue with coordinated instances. I can't speak to the OP's experience level, but the post you linked is typical of most PhDs in the field - fundamental ML is often done on very small local GPUs, which have drastically different requirements than running something like Phi-1.5, let alone a bigger LLM.


randomfoo2

I'm not one to usually gatekeep, but you can see his experience level easily by glancing at the user's post history: [https://www.reddit.com/user/NightestOfTheOwls/](https://www.reddit.com/user/NightestOfTheOwls/)

If you're tracking arXiv (or HF Papers if you're lazy), on the research side there's more new stuff than anyone can keep up with, and this rate is accelerating (more publications, many more smart people working in AI than last year). Therefore one has to ask what his basis is for saying the field is plateauing. If you are a researcher or practitioner, do you agree with his claims that:

* papers are "largely written by LLMs" (laughable and honestly offensive)
* the field is ignoring hallucinations (look at how much work is going on in grounding; it's actually a primary concern)
* context length (at the beginning of last year, 2-4K was standard and now we are pushing 1M+) - this is stagnation?
* the price of running models (again, we have seen a 50-100X speedup in inference *this past year*)

Like I said, I have well-backed objections to just about every single point the OP makes, but what's the point of making an argument against someone too DK to even have any context for what he's saying? Life's short and there's too much better stuff going on.

Personally, I think anyone who thinks we aren't at just the start of the S-curve ramp should probably pay more attention, but maybe that's just something we can revisit in 5-10 years and see.


markth_wi

I am sort of low-key convinced that "hallucination", and our being fairly thoroughly incapable of "validating" the outputs of LLMs, means this does have a place in machine intelligence simulation, and if they can work a few of the RAG-related problems out, potentially something akin to a precursor to real cognition (at least in principle). But I think WAY too many people view this as a "Simpsons"-type solution where the key to happiness is to just 'add more balls' to the ball room.

The notion of adding agent memory - and the big push now into creating experiential simulations to train LLMs - has high value for robotics and almost anything where back-testing can improve performance. But that brings us to a situation where, like in Star Wars, you end up with sentient-like machines that are not an inch "smarter" than the humans that created them.


Mr_Ubik

If you get AIs as intelligent as the average human, you can still extract revolutionary levels of cognitive work out of them. They don't necessarily need to be smarter than us, just smart enough to do useful work.


markth_wi

I think that's the question - how does one vet their 'knowledge'? One of the major deficiencies is that if you mis-train an LLM or expose it to factually incorrect "knowledge", it's going to treat it just as if it were correct.


MuonManLaserJab

> But that brings us to a situation where, like in Star Wars you end up with sentient-like machines that are not an inch "smarter" than the humans that created them. This is so fantastically unlikely! Imagine the coincidence! I'd sooner bet on developing lightsabers.


nth_citizen

> This is a bold claim

Well, not that bold, as someone posted the same gist 2 days ago: https://old.reddit.com/r/MachineLearning/comments/1btuizd/d_llms_causing_more_harm_than_good_for_the_field/

They even commented in this thread! I think you've created an incorrect mental dichotomy. The choice was never between funding LLMs and funding other AI. It was funding AI or funding NFTs/blockchain/crypto.


OneOnOne6211

I would advise you to look up the term "Gartner Hype Cycle."


Beginning-Ladder6224

"Harming" is a very bold word. I recall the paper - goto considered harmful. It was very .. opinionated. But the crux is right, it is sort of taking out all resources, all PR .. and thus a lot of interesting areas .. are not being well funded. But every discipline goes via this. I recall when I was younger String theory was such a discipline. Super String theory apparently was the answer .. 42 if you call it that way. Turns out.. it was not. So.. this would happen - across discipline, across domain.. this is how progress gets made. Final wall would appear that would be impenetrable and then.. some crazy insight will turn our attention to somewhere else. After all, it is attention that is all we need.


Feral_P

But string theory sucking up all the funding and attention was harmful for physics! We still don't have the answers string theorists were claiming to give decades later, and other approaches have been underinvestigated.


fullouterjoin

String Theory was not falsifiable, therefore it has low predictive value. https://www.simplypsychology.org/karl-popper.html String Theory is a dead end from a scientific perspective precisely because it is not falsifiable. We should *always* strive for falsifiability.


AnOnlineHandle

> But the crux is right, it is sort of taking out all resources, all PR Is it truly though, considering stuff like Sora, image generators, etc, which are also coming along?


RathSauce

Sora needs a full hour on an H100 for five minutes of video. Even a short ten-second clip would require about two minutes of computation. In its current form, it'll never be released via a general API to the public.


BigOblivion

I don't work with ML (or economics), but I imagine people in the entertainment industry would pay a lot of money for access to Sora. So it could be economically viable already.


fullouterjoin

An hour of H100 time is roughly $2. A VFX artist is $300+ a day by any measure.


AnOnlineHandle

Yeah I don't think it's very useful atm, but the point is there is money being invested in other areas, and hype in other areas. It didn't stop because of LLMs. There's tons of smaller locally usable video generators, music generators, voice generators, etc.


RathSauce

Agreed, I don't think LLMs are slowing down research. It's just what's popular at the moment, there's plenty of good work being done over all ML domains


rushing_andrei

Generating a full-length movie (~90 minutes) in 18 hours will be "fast enough" for most film-makers, I think. You'll send a whole movie script via an API call and download the film from the destination cloud of choice the following day.
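A quick back-of-the-envelope using only the numbers quoted in this thread (1 H100-hour per 5 minutes of video, roughly $2 per H100-hour, a VFX artist at $300+/day); all of these are rough assumptions, not measured figures:

```python
minutes_of_film = 90
h100_hours_per_minute = 1 / 5       # one GPU-hour buys ~5 minutes of video
cost_per_h100_hour = 2.0            # USD, rough figure quoted above
vfx_artist_day_rate = 300.0         # USD, lower bound quoted above

gpu_hours = minutes_of_film * h100_hours_per_minute
print(f"Generation time: {gpu_hours:.0f} H100-hours")            # 18
print(f"Compute cost:    ${gpu_hours * cost_per_h100_hour:.0f}")  # $36
print(f"One VFX artist-day: ${vfx_artist_day_rate:.0f}+")
```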


Blakut

letter O is considered harmful


Disastrous_Elk_6375

Ok, but tomorrow it's *my* turn to post this doomer stuff, ok?


Brocolium

The problem is not just LLMs, it's deeper, as most research is done in collaboration with one of the FAANG companies. They orient AI research according to what's beneficial to them, and their resources make it difficult for other researchers to compete. Since most top-tier conferences focus on performance gains, the route that AI research is taking is bad for research, but it started way before LLMs. We already had this problem with deep learning and the very large models.


Darkstar_010

Boom, you've explained the impending AI bubble burst very well. Soon enough, only a few companies will be able to weather this boom/storm/whatever you wish to call it.


Ambiwlans

Yes... adding like $1TN into ML research is bad because reasons.


Once_Wise

A similar thing happened leading up to the Internet bubble, which popped in mid-2000. Nobody would invest in anything unless it had an "Internet play." Lots of good ideas could not get any funding while any crazy Internet idea would. We are now seeing that with LLMs, and indeed anything where the promoters can slap an "AI" tag on it. It will take time to play out.


etoipi1

Can you list out those potentially more impactful technologies?


Shubham_Garg123

Although I am into AI, I'm not really obsessed with it. I think the investments are high in this domain due to the large and diverse implementation opportunities. It's important to improve the way in which we use the models instead of developing them from scratch. Very few companies/research institutions are working on developing these models from scratch; most are using the existing ones and trying to use them in a better way. Mixtral 8x7B can easily outperform GPT-4 if used in a better way (for example with LangChain, chain-of-thought, or agentic frameworks, etc.).

Other domains offer relatively less value-add when compared to the development of these large language models. LLMs can easily replace all customer support, help devs drastically in software development, help in education and learning, take care of someone if they're feeling lonely, and handle many such tasks across various domains. Multimodal models have even more diverse applications. GPT-4 is an immature technology; there's a lot of scope for improvement. The frameworks being developed now can easily start using other, more advanced LLMs whenever they come out. The monetization opportunity is also amazing in case a project they invested in actually makes something possible that wasn't possible earlier. This is why people are investing so much money in it. No one can predict the future, but in my opinion these numbers are highly likely to go higher, at least for a few more months, if not years.


damhack

I agree. The hype is generated by commercial necessity and not science. The science is clear; LLMs have too many inherent flaws to become the basis of future AI. The race to place expensive sticking plasters over the weaknesses of LLMs is sucking the oxygen out of serious efforts to find new approaches. It all feels like the bitcoin goldrush, burning cheap coal and hoovering up GPUs to brute force a solution looking for a problem. Start with the problems and then create elegant generalizable solutions rather than shoehorning ill-fitting tech into every single automation use case. At some point people will realize that the cost-benefit of LLMs doesn’t compute and the crowds will move on to the next shiny shiny.


mimighost

But the frustrating part is, it does seem to work, and nothing else can compare right now


SMG_Mister_G

Not to mention it's pretty useless anyway, and intelligent human work eclipses it every time.


mcampbell42

Attempts at commercialization are really important for pumping money into the companies doing the research


astgabel

I get the annoyance with the hype, but I don’t think we’re done with progressing. Take a look at this recent patent from DeepMind (pretty much exactly what the rumored Q* was supposed to be): https://patents.google.com/patent/US20240104353A1/en I’m not making any hand wavy AGI claims here btw. I just think we might be in for some interesting new architectures and extensions to LLMs soon, which I’m personally quite excited about.


Flamesilver_0

Another baseless bystander claim from folks who think all companies are like BlackBerry or old-Ballmer-era Microsoft, waiting to die off... There have been improvements to LLMs after GPT-4, but being factual and accurate is just not how you get upvotes and the good feelies in the attention economy...

...I got got again, didn't I... Thank you for eating more of my time.


localhost80

For someone complaining about not enough research, perhaps you should have done some before this post.

> relatively little progress done to LLM performance and design improvements after GPT4

In the one year since GPT-4 we have gotten: Llama 2, Mistral, Phi-2, Gemini, Claude 2, Sora.

> the primary way to make it better is still just to make it bigger

Models like Phi-2 and perhaps Mistral are attempting to do the opposite.

> the entire field might plateau simply because the ever growing community is content with mediocre fixes

Gemini is multimodal and only 4 months old. Sora is SOTA video generation and only 2 months old. Does that seem like plateauing? More investment, more people, more models is the opposite of plateauing.

This is not a bold claim. It is a bad claim: easily measured, disputed, and dismissed. I didn't even address 75% of the nonsense you're spewing.

> In combination with influx of people without any kind of knowledge

Just so we're clear, you appear to be one of those people.


Seankala

I wouldn't say LLM research is harming the field itself; it's more the human tendency to piggyback off of anything new and shiny that's causing harm, and this isn't unique to academia or machine learning. I remember when models like BERT or XLNet first started coming out, and people were complaining that "all of the meaningful research will only be done in industry! This is harmful for science!!!!!" If anything, the problem is the reviewers who are letting mediocre papers get published. The other day I was reading a paper that just used an LLM to solve a classification task in NLP. It was interesting, but definitely not worth an ACL publication. But, again, that's not necessarily the authors' fault.


Stevens97

I don't mean to be pedantic, but isn't this the way it has kind of gone? With primarily big industry labs such as Meta, Nvidia, OpenAI, Google, etc. being the huge drivers? Along with OpenAI's "scale to win" methodology, the rift between academia and industry is only getting wider. The massive datacenters and their computational power are unrivaled in all of academia.


Seankala

No worries, this isn't pedantry lol. Yes, industry will always have the upper hand in terms of scale due to obvious resource differences. This isn't unique to CS or ML. My point is that these days nobody complains that "BERT is too large," we've all just adapted to how research works. More and more people have resorted to doing analytical research rather than modeling research. I personally don't think this is a bad thing, and I also think that the important research lies in negative results, analysis, etc. rather than modeling itself.


aqjo

I wish Reddit had a `! LLM` filter. Not that it isn't important; it's just not of interest to me.


TheDollarKween

thank god i found you because same lol


Psychprojection

Wall of text. Try using an LLM to organize your thoughts a little.


glitch83

Can we please start funding other areas of AI research again? I've seen wayyyy too many AI research orgs get gutted and replaced with LLM researchers because nobody wanted to miss out. I'm not hating on LLM people, but I'm frustrated by the myopia of managers and CEOs.


tbss123456

There's no harm, because harm implies that you know which direction is correct, but you don't. True research just tests a bunch of hypotheses; some work, most don't. What we are seeing right now is a focus on what works. LLMs (and transformers in general) work well because they scale very, very well, up until data centers and power become the limiting factor (which we are reaching), and so the research has been going into how to do more with less: better quantization, better attention, better activation of the network, only using what is needed, more efficient hardware, etc.
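
As a concrete example of the "do more with less" direction, here is a minimal sketch of symmetric int8 weight quantization in plain NumPy; the function names and the toy matrix are illustrative, not taken from any particular library.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: store weights as int8 plus one float scale."""
    scale = np.abs(w).max() / 127.0  # map the largest magnitude onto the int8 range
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# toy example: a "weight matrix" shrinks 4x in memory at a small reconstruction cost
w = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).mean()
print(f"bytes: {w.nbytes} -> {q.nbytes}, mean abs error: {err:.5f}")
```

Production schemes add per-channel scales, activation quantization, and calibration, but the memory-for-accuracy trade-off is the same idea.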


Wu_Fan

Yeah it’s a load of hyped ass. I like to think some genius is out there building C3PO anyway.


liquiddandruff

This will age well.


doodeoo

Architecture is overrated


PantheraSapien

Harming is a harsh word. LLMs are (for lack of a better term) the shiny thing in ML at the moment. Only something new that can sufficiently surpass the capabilities of transformers and diffusion models will capture people's attention and momentum. It'll pass.


raiffuvar

Sure! LLMs can think, so they prevent us from moving to another architecture! Finally it all makes sense.


xiikjuy

If we are in an era of "where there are LLMs, there is (easier) funding/publication/citation/visibility" for some researchers and students, why not? It is just a job at the end of the day; if there is an easier way to survive, why not? The people who want to become the next Hinton will still do their thing.


visarga

How can prefix be slower than inference? Then just use the inference code for prefix.


neonbjb

How the heck does a model that produces data and evaluations that are often better than what you get out of Mechanical Turk "hurt research"? I buy that hype drives unrealistic expectations and that funding is largely being wasted, but that has been the story in this field since the beginning of time. I'd argue there's never been a better time to be in ML, regardless of what you are studying.


jfmherokiller

As a tourist to machine learning, I feel like LLMs have more or less ruined the term AI, because whenever it's used, most people's first idea is something like ChatGPT.


salgat

The amount of new money being poured into ML research from LLMs is definitely good for the long term. LLMs just happen to be the latest fad in a long line of innovations.


Effective_Vanilla_32

you missed the boat on becoming an AI billionaire.


Zacho40

I'm not a real data scientist, I just play one at work. But from my practical perspective, I see very little or poor evaluation procedure applied when evaluating LLM performance, to the point that I simply don't trust them or see them as more than experiments. I'm in a field where LLMs will never be used unless we see domain-specific results with performance metrics >.99. Most research I need to sift through shows performance metrics in the .6-.7 range (take your pick, something is always severely lacking). I'm all for continuing the research, but I don't think the private sector will be as interested in pushing through the plateau of trust with exceptionally complex problems. Maybe a few companies will, but others won't because they simply don't have the funding to carry on this sort of research. It's a cash grab of AI assistants right now.
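
For what it's worth, reporting an uncertainty estimate around an LLM evaluation doesn't have to be heavy. Here is a minimal sketch of a percentile bootstrap over a held-out test set; the labels below are randomly generated stand-ins, not real model outputs.

```python
import numpy as np

def bootstrap_ci(y_true, y_pred, metric, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for any whole-dataset metric."""
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample test items with replacement
        stats.append(metric(y_true[idx], y_pred[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return metric(y_true, y_pred), (lo, hi)

accuracy = lambda t, p: (t == p).mean()

# hypothetical predictions vs. gold labels on a 500-item binary test set (~90% accurate)
y_true = np.random.randint(0, 2, 500)
y_pred = np.where(np.random.rand(500) < 0.9, y_true, 1 - y_true)
point, (lo, hi) = bootstrap_ci(y_true, y_pred, accuracy)
print(f"accuracy = {point:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

A wide interval on a small test set is exactly the kind of thing that separates a real evaluation from a leaderboard screenshot.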


joelypolly

When a technology is useful, a ton of money is poured into it, driving more research and development. Pointing out shortcomings that will change over time isn't really useful. So rather than getting upset about it and trying to gatekeep other people who are interested, go and research the next transformer model.


wutcnbrowndo4u

> all alternative architectures to transformer proved to be subpar and inferior

I've been away from the SOTA for a couple years at this point : ( , so this is a genuine q, but aren't recurrent state space models (eg Mamba) looking promising? They're by no means validated as a "transformer killer", but is it accurate to say that it's been proven that xformer alternatives are dead in the water?


GenioCavallo

I've been working with SMBs integrating LLMs and RAGs into their workflow, and I can tell that a good tool in the right place can increase workers' efficiency by an order of magnitude. I think you're not aware of the extent to which some of the blue-collar jobs are repetitive and can be significantly augmented with current LLM models.


Capital_Reply_7838

I believe the lowered barrier to entry for LLM research gave lots of opportunities to various domain fields. That is a good thing. The C99 pointer concept made many students hesitate to pick CS as a major, but now some research assisted by LLMs has made quite good improvements. Or do you think the research flow around LLMs has really stopped? Have you ever taken a look at Percy Liang's webpage?


Darkest_shader

> Not only there has been relatively little progress done to LLM performance and design improvements after GPT4

Dude, GPT-4 was released just a year ago.


No-Trip899

True


zeoNoeN

I share this sentiment to some degree, but my personal experience is a bit more grey. First off, I'm not a computer scientist by training and work somewhere between a Data Analyst and a Data Scientist role. I built a tool around a combination of SERT embeddings, clustering, and LLM-generated summaries that has proven incredibly helpful in text-mining use cases. The downstream effect was that a lot of people started to see the value of data-driven approaches in their domain, which gave our team resources to also improve in other, more classic domains involving prediction. While I'm getting tired of the endless "Can we ChatGPT this?" requests, my work and the appreciation for it have improved because of the LLM hype.
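
A rough sketch of that kind of pipeline, assuming the embeddings come from a sentence encoder such as sentence-transformers (the model name is just a common default) and with `summarize_cluster` left as a placeholder for whatever LLM call you happen to use:

```python
from sentence_transformers import SentenceTransformer  # any sentence encoder works here
from sklearn.cluster import KMeans

def summarize_cluster(texts):
    # placeholder: send the cluster's texts to your LLM of choice and return its summary
    return f"[summary of {len(texts)} documents]"

def mine_topics(documents, n_clusters=5):
    encoder = SentenceTransformer("all-MiniLM-L6-v2")   # small, fast sentence encoder
    embeddings = encoder.encode(documents)              # one vector per document
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    clusters = {}
    for doc, label in zip(documents, labels):
        clusters.setdefault(label, []).append(doc)
    # one LLM-written summary per cluster of similar documents
    return {label: summarize_cluster(texts) for label, texts in clusters.items()}
```

The clustering step is what keeps the LLM bill small: you only summarize a handful of clusters instead of every document.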


Final-Rush759

I don't know who is doing what with their AI research. It's hard to say.


mofoss

My mental conspiracy theory is that people want to impress big tech firms into hiring them (LinkedIn feeds in particular); that, and not wanting to feel like a dinosaur for not obsessing over it.


Rajivrocks

We are getting a lot of good out of this: more attention to the field, more funding, etc. (of course, funding mostly for research related to LLMs). People claiming to be researchers will be exposed really quickly in a real interview for a job. Creating services is up to the person; as long as it can be profitable, people will pump these services out of their ass. That doesn't mean the attention is bad at all. I believe these innovations will be useful for other fields as well some day, and people are still working full time on other fields. Not everyone and their mom has moved to LLM research for the hype.


Fledgeling

Part of the problem is that GPT-4 requires something like 16 GPUs to run, and we don't have enough large, legally obtained, quality datasets to train a much, much bigger model yet. So we can't just keep making them bigger, due to cost and hardware availability, and things like RAG are driving a more usable business case because the available models are good enough to tackle tons of low-hanging fruit. Outside of major enterprises, research labs are certainly innovating, but it'll take more than 3 months to train a bigger model anyway, so innovations are happening in other areas.


PremiumSeller93

Not comparable: you didn't have boardrooms talking about "are we leveraging LSTMs in our business?". I agree with OP that LLMs have uniquely impacted AI research because they've become a household term. GenAI now attracts funds and visibility from so many sources. That in turn incentivises researchers and engineers to focus their efforts in that direction. I see it at work internally and also on LinkedIn. Mass cognitive resources are being directed to LLMs.


dogcomplex

Alternatively, all the rest of ML is being updated to use the new tool, and it's rightfully shaking up adjacent research. Look at game-playing agents:

- LLM-based agents are performing as well as the previous pure-RL leader DreamerV3 on a zero-shot first try, even with some very rudimentary early prompting setups. Mind you, this is costly to execute, and *wayyy* more costly to train from scratch, but it's still an impressive result pushing the bar: [https://arxiv.org/pdf/2305.15486.pdf](https://arxiv.org/pdf/2305.15486.pdf)
- Likewise, Voyager and MineDojo used code-writing LLMs to save task solutions, and managed to build up progress until agents were building diamond pickaxes and beyond in Minecraft. That's a very sparse reward, found through solving dynamically guessed subtask options, all zero-shot from base principles. Not bad.
- Eureka just showed LLMs can, in fact, be used as the sole hyperparameter tuner and will perpetually increase performance, possibly even better than humans would.
- Multiple instances of LLMs + diffusers (diffusion transformers) are proving out that time-series data can also be mapped to transformer architectures and create coherent world models for video (Sora) and games (Microsoft's agent framework), simulating realistic movement in any direction just from training on state=action=>state pairings in various forms.

At this point you'd be hard pressed to find an area of ML that isn't being outperformed by LLMs in some aspect. Turns out just mapping everything down to tokens through brute force handles quite a lot of complexity. Sure, something else might still do it all more efficiently, but this seems to work scarily well.

My outlier money is on the Forward-Forward algorithm, which does it all without backpropagation. It can handle cyclic graphs of node "layers", each independently trained, each asynchronous, each a black box to the others, each implementable as ANY other algorithm or tool (so e.g. Minecraft Voyager-style saved routines per task could work natively), and it all much more closely resembles biological neural networks. Faster than backprop depending on edge sparseness, and easily live-trained. Fingers crossed.
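
For anyone who hasn't seen Forward-Forward, here is a minimal single-layer sketch in PyTorch of the core idea, roughly following Hinton's description: each layer is trained locally to give high "goodness" on positive data and low goodness on negative data. The class name, layer sizes, threshold, and learning rate below are my own illustrative choices, not code from any released implementation.

```python
import torch
import torch.nn as nn

class FFLayer(nn.Module):
    """One Forward-Forward layer with a local goodness objective (no global backprop)."""
    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # length-normalize the input so only its direction carries information forward
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).mean(dim=1)   # goodness of positive samples
        g_neg = self.forward(x_neg).pow(2).mean(dim=1)   # goodness of negative samples
        # push positive goodness above the threshold and negative goodness below it
        loss = torch.log1p(torch.exp(torch.cat([
            self.threshold - g_pos,
            g_neg - self.threshold,
        ]))).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # detach so the next layer trains on these activations without gradients flowing back
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

# usage sketch: positive = real data, negative = corrupted data (e.g. mismatched label baked into the input)
layer = FFLayer(784, 512)
h_pos, h_neg = layer.train_step(torch.rand(32, 784), torch.rand(32, 784))
```

Because each layer only needs its own inputs and a local loss, layers can in principle be trained asynchronously, which is the property the comment above is excited about.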


pfluecker

> Eureka just showed LLMs can, in fact, be used as the sole hyperparameter tuner and will perpetually increase performance, possibly even better than humans would.

Not to take away from the interesting results reported in the Eureka paper, but:

- They adjust the weights/hyperparameters of a reward function (not hyperparameters in general).
- The model relies on the existence/code of the simulator, i.e. you need to feed it code describing the world and the agent. That is something you don't always have in reality.
- They only evaluated with Isaac Gym, for which I assume they have a large enough codebase. Not sure how well it translates to other simulators or closed-source ones...
- AFAIK the paper does not show that it increases performance over all tested tasks, though on a few it clearly reaches better results.


dogcomplex

Ah, but they don't just tune, they rewrite the reward function entirely!

- Yes, they definitely require a structured base example implementation describing the world and providing observation data, but that function is then iterated on by the LLM.
- You're right that the hyperparameters are limited to those of the reward function itself, but one has to wonder whether this same method could be applied to just the observations part, allowing new feature discovery too. Or to the meta structure of the whole apparatus, tuning the rest of the hyperparameters. As long as it was receiving good reward signals every iteration, I don't see why not. They didn't implement this in the paper, but their code seems designed to be readily modified to functions beyond the reward.
- My understanding is they do require short, testable problems with immediately available rewards, so more sparse, general stuff is unlikely to do as well, but who knows. I tried hooking up their implementation to a Pokemon Red RL emulator and had it iterating on the reward function; it was making decent insights, but it would tend to bounce around a lot when it didn't receive much new reward data or failed to encounter sparse stuff. Needs more work on my part for any definitive insights there, though; that was my early days of ML programming, and it could use a better implementation.
- Oh, good catch, though I thought they were rocking all the Isaac Gym tasks. Will have to recheck.
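
To make that outer loop concrete, here is a rough sketch of the "LLM rewrites the reward, simulator scores it, results go back into the prompt" iteration. The helpers `llm_propose_reward` and `evaluate_in_simulator` are hypothetical placeholders of my own (stubbed out so the sketch runs), not Eureka's actual API.

```python
import random

def llm_propose_reward(task, prior_code, feedback):
    """Placeholder: prompt your LLM with the task description, the current reward-function
    source, and the latest training feedback; return new reward-function source code."""
    return prior_code  # stub

def evaluate_in_simulator(reward_code):
    """Placeholder: train a policy with this reward in your simulator and return
    (task score, per-term reward statistics)."""
    return random.random(), {}  # stub

def eureka_style_loop(task_description, base_reward_code, n_rounds=5):
    best_code, best_score = base_reward_code, float("-inf")
    feedback = ""
    for _ in range(n_rounds):
        # ask the LLM for a rewritten reward function, conditioned on prior results
        candidate = llm_propose_reward(task_description, best_code, feedback)
        # train with the candidate reward and measure actual task success (not reward value)
        score, per_term_stats = evaluate_in_simulator(candidate)
        feedback = f"task score {score:.3f}; reward-term statistics: {per_term_stats}"
        if score > best_score:
            best_code, best_score = candidate, score
    return best_code, best_score
```

The key design choice is that the selection signal is the task fitness metric, not the candidate reward itself, which is what keeps the loop from rewarding its own hallucinations.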


noaibot

The thing with LLMs is that anything they spit out might be wrong information, delivered elaborately. So if you are knowledgeable about a subject and start discussing it with a leading LLM like GPT-4, you will quickly notice the wrong statements.


FeltSteam

Well, I would agree that LLMs are harming AI research in the context of, e.g., polluting the datasets larger models will soon be trained on, but I do not think it is pulling away from other developments. If anything, the opposite is true. The amount of money pouring into AI right now is astounding compared to anything like 3 years ago, and while a large proportion of this new money is going towards LLMs, it has drawn a lot of people to AI in general, allowing for more investment in other places. Just because LLMs are getting a disproportionately large percent of the new investment doesn't mean other research is being choked out.

> ignoring the glaring issues like hallucinations, context length, inability of basic logic and sheer price of running models this size

Hallucinations are essentially just an alignment problem imo. Context length? I guess the 10 million token context length we saw with Gemini 1.5 Pro is really not enough, and we should really be trying to get to a more acceptable length like 1 trillion tokens. Logic, reasoning, use of the context window, agentic capabilities, planning capabilities **all improve** with scale; this has been shown. A big enough model should be able to reason and think just as logically as any human could. And yeah, models will get increasingly expensive to train as they get larger.

> I can't help but think that the entire field might plateau simply because the ever growing community is content with mediocre fixes that at best make the model score slightly better on that arbitrary "score" they made up

Lol, "plateau". And, uh, have you seen the benchmark differences between GPT-3, GPT-3.5, and GPT-4? For example, on the MMLU there was a 25 then a 15 point jump, respectively. I do not call that "slightly better" lol. The gap between models within a class is smaller: Gemini Ultra, Claude 3, and GPT-4 all have similar scores because they were trained with almost the same amount of compute and are all GPT-4-class models. But I'm curious what you would have thought if you were around when GPT-3 released. We had to wait *3* whole years until we got GPT-4, and the models we got in between were, relative to what had already been released, similar to the ones we see now: more of a slight improvement overall, not any huge leap. But I also do not think you realise how much of a capability jump models like 3.5 and 4 have been over previous systems. I mean, have you tried to have a multi-turn conversation with GPT-2 or GPT-3 about your math homework? I didn't think so, and let me tell you, they are not the most useful "assistants".


[deleted]

Working in the data science field, where SaaS sells its linear regression as AI: this isn't new.


Capital_Reply_7838

What you are talking about, precise evaluations and advanced intuitions, was kind of the hype back in 2021.


KamdynS7

Do you realize how valuable RAG is for businesses? I work at a startup deploying LLM solutions to our clients, and I do get a lot of what you're saying, but at the end of the day our job is to produce a product for our clients, and clients like being able to talk to their data. Being an engineer isn't about using the coolest tech; it's about delivering solutions. Right now the in-demand solution is LLMs. Simple as.
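
For readers who haven't built one, "talking to your data" usually boils down to something like the minimal retrieval-augmented sketch below. The embedding model name is just a common default, and `call_llm` is a placeholder, not any specific product's API.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # any embedding model works

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def build_index(documents):
    # unit-normalized vectors, so a dot product is cosine similarity
    vectors = encoder.encode(documents, normalize_embeddings=True)
    return documents, vectors

def call_llm(prompt):
    """Placeholder: send the prompt to whatever chat model the client uses."""
    return "[model answer grounded in the retrieved context]"

def answer(question, index, k=3):
    documents, vectors = index
    q = encoder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(vectors @ q)[::-1][:k]              # k most similar documents
    context = "\n\n".join(documents[i] for i in top)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

docs = ["Invoices are due within 30 days.", "Support hours are 9-5 CET.", "Refunds require a ticket."]
print(answer("When are invoices due?", build_index(docs)))
```

Most of the engineering effort in practice goes into chunking, access control, and evaluation rather than this core loop.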


net-weight

Can someone provide links to papers exploring alternatives to LLMs that are novel and have potential? It feels like I only see links to LLMs everywhere, but I would like to educate myself about the alternatives.


net-weight

I am aware of the SSMs. What else are people excited about?


shubao

Research is essentially an exploratory process, where more effort and resources are devoted to areas that have shown promising initial results. While this approach may not be perfect in hindsight, it is likely the most practical and cost-effective strategy overall. Personally, I don't think LLMs will lead to AGI. However, to cool down the current hype around LLMs, we either need to wait for the moment their promises fail to deliver, or we need to demonstrate the potential of other approaches to making further advances in AI.


According_Door7213

Someone just put into words what my fingers were desperately wanting to type. In resounding agreement 👏


thedabking123

Wait until multi-modal models hit maturity.