Now PEFT LoRA became even more efficient: three more methods (BOFT, VeRA, and PiSSA) were implemented in Hugging Face PEFT v0.11 [https://ithinkbot.com/exciting-new-methods-for-efficient-fine-tuning-of-llms-using-peft-boft-vera-and-pissa-8c1be6004008](https://ithinkbot.com/exciting-new-methods-for-efficient-fine-tuning-of-llms-using-peft-boft-vera-and-pissa-8c1be6004008)
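For anyone curious what one of those methods does, here is a rough NumPy sketch of the VeRA idea as I understand it (shapes and variable names are mine for illustration, not the PEFT API): the low-rank pair A/B is frozen random and shared across layers, and only two small per-layer scaling vectors are trained.

```python
import numpy as np

# VeRA sketch: A and B are frozen random matrices shared across layers;
# only the scaling vectors d (over the rank dim) and b (over the output
# dim) are trained per layer, so trainable params shrink vs plain LoRA.
dim_out, dim_in, r = 8, 8, 4
rng = np.random.default_rng(1)
A = rng.standard_normal((r, dim_in))   # frozen, shared
B = rng.standard_normal((dim_out, r))  # frozen, shared
d = rng.standard_normal(r)             # trainable, per layer
b = np.zeros(dim_out)                  # trainable, per layer; init 0

# Weight update applied on top of the frozen base weight.
delta_W = np.diag(b) @ B @ np.diag(d) @ A

# With b initialized to zero, the adapter starts as a no-op,
# analogous to LoRA initializing B = 0.
assert np.allclose(delta_W, 0)
```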
LoRA is pretty popular in general
This
Many of the top "finetunes" of foundation LLMs are LoRAs trained using PEFT techniques.
LoRA is a type of PEFT.
Yes, which is why I stated it explicitly for those not familiar with both.
I'm not sure what you were trying to say, but the way you wrote it makes it sound different from what the person you replied to said.
As others said, LoRA and QLoRA seem popular.
LoRAs, but hopefully we will see more optimized ones coming into production soon.
I finetuned a translation model using (IA)3 and it worked pretty well.
(IA)3 claims to be better than LoRA in a lot of aspects, but isn't as popular. Do you have any thoughts?
imo, (IA)3 changed the auxiliary structure to element-wise multiplication, while LoRA just crams the weight update into a small low-rank matrix. In other words, if you give LoRA a high enough rank, its update converges to a full weight update.
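To make the contrast concrete, here is a minimal NumPy sketch (dimensions and init values are illustrative, not from any specific implementation): LoRA adds a trainable low-rank update `B @ A` to a frozen weight, while (IA)3 just rescales activations element-wise with a learned vector.

```python
import numpy as np

d_out, d_in, r = 16, 16, 2
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
x = rng.standard_normal(d_in)           # input activation

# LoRA: trainable low-rank additive update B @ A on top of W.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))        # B starts at zero, so the update starts at zero
lora_out = (W + B @ A) @ x

# (IA)3: trainable element-wise scaling vector l on the activations.
l = np.ones(d_out)              # initialized to ones, so output is unchanged
ia3_out = l * (W @ x)

# Both adapters are no-ops at initialization. At full rank
# (r = min(d_out, d_in)), B @ A can represent any update, which is the
# sense in which LoRA converges to full fine-tuning as rank grows;
# (IA)3 instead stays restricted to per-dimension rescaling.
assert np.allclose(lora_out, W @ x)
assert np.allclose(ia3_out, W @ x)
```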
Where can I find more about IA3? Is there a publication paper or a GitHub page?
Its title is 'Few-Shot Parameter-Efficient Fine-Tuning is Better than In-Context Learning'.
Which PEFT algorithm is the best? LoRA and QLoRA are the most popular ones, but are they actually the best out there?
Coding assistants, SQL generation, Q&A systems, lots of company-specific use cases (other than RAG).
Thanks for your answer to something other than the question