Tech Insights
PEFT

What is PEFT?

PEFT (Parameter-Efficient Fine-Tuning) is a set of techniques that enable the efficient adaptation of pre-trained language models (PLMs) to various downstream tasks. Instead of fine-tuning all the parameters of a large PLM, PEFT methods only fine-tune a small number of (extra) model parameters, thereby greatly decreasing the computational and storage costs. Common PEFT methods include LoRA (Low-Rank Adaptation), Prefix-Tuning, Prompt Tuning, and Adapter Tuning. These techniques are commonly used to adapt large language models to specific tasks while minimizing resource consumption.
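To make this concrete, the sketch below applies one common PEFT method, LoRA, to a pre-trained causal language model using the Hugging Face peft library; the model name and hyperparameters (rank, alpha, target modules) are illustrative assumptions rather than recommendations from this page.

```python
# Minimal LoRA fine-tuning setup with the Hugging Face peft library.
# Model id and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Load a pre-trained language model; its original weights will stay frozen.
base_model = AutoModelForCausalLM.from_pretrained("gpt2")

# Configure LoRA: small low-rank matrices are injected into selected layers.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,               # rank of the low-rank update matrices
    lora_alpha=16,     # scaling applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPT-2; varies by architecture
)

# Wrap the base model; only the LoRA parameters are marked trainable.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

The wrapped model can then be trained with a standard training loop or Trainer, and the resulting adapter checkpoint is only a small fraction of the size of the full model.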

What other technologies are related to PEFT?

PEFT Complementary Technologies

- QLoRA: a quantization technique often used to reduce memory footprint during PEFT training, making it directly complementary (see the sketch after this list). Mentioned alongside PEFT in 24% (76) of relevant job posts.
- LoRA (Low-Rank Adaptation): a specific PEFT technique that adapts a pre-trained model to a task by training small low-rank weight updates while the original weights stay frozen; PEFT encompasses LoRA. Mentioned alongside PEFT in 4% (243) of relevant job posts.
- Reinforcement Learning from Human Feedback (RLHF): can be used with PEFT to fine-tune models based on human preferences, making them complementary. Mentioned alongside PEFT in 10% (70) of relevant job posts.
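Because QLoRA pairs a 4-bit quantized, frozen base model with trainable LoRA adapters, the two are typically combined in a single setup. The sketch below illustrates that combination using the Hugging Face transformers, bitsandbytes, and peft libraries; the model id and hyperparameters are illustrative assumptions, not values taken from this page.

```python
# Illustrative QLoRA-style setup: 4-bit quantized base model + LoRA adapters.
# The model id and hyperparameters below are placeholder assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Quantize the frozen base weights to 4-bit NormalFloat (NF4).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)
base_model = prepare_model_for_kbit_training(base_model)

# Attach small trainable LoRA adapters on top of the quantized model.
lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; names vary by architecture
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

The quantized base model reduces GPU memory during training, while the LoRA adapters keep the number of trainable parameters small; the two techniques address different costs and are complementary rather than alternatives.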

Which organizations are mentioning PEFT?

Organization    Industry
Google          Scientific and Technical Services
SirionLabs      Scientific and Technical Services
Apple           Scientific and Technical Services

This tech insight summary was produced by Sumble. We provide rich account intelligence data.

On our web app, we make a lot of our data available for browsing at no cost.

We have two paid products, Sumble Signals and Sumble Enrich, that integrate with your internal sales systems.