SetFit documentation

SetFit


🤗 SetFit is an efficient and prompt-free framework for few-shot fine-tuning of Sentence Transformers. It achieves high accuracy with little labeled data: for instance, with only 8 labeled examples per class on the Customer Reviews sentiment dataset, 🤗 SetFit is competitive with fine-tuning RoBERTa Large on the full training set of 3k examples!

Compared to other few-shot learning methods, SetFit has several unique features:

  • 🗣 No prompts or verbalizers: Current techniques for few-shot fine-tuning require handcrafted prompts or verbalizers to convert examples into a format suitable for the underlying language model. SetFit dispenses with prompts altogether by generating rich embeddings directly from text examples.
  • 🏎 Fast to train: SetFit doesn't require large-scale models like T0, Llama, or GPT-4 to achieve high accuracy. As a result, it is typically an order of magnitude (or more) faster to train and run inference with.
  • 🌎 Multilingual support: SetFit can be used with any Sentence Transformer on the Hub, which means you can classify text in multiple languages by simply fine-tuning a multilingual checkpoint.
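A key reason a handful of labels goes so far is SetFit's first training stage: the few labeled examples are expanded into many contrastive sentence pairs (same-class pairs as positives, cross-class pairs as negatives), which are used to fine-tune the Sentence Transformer before a classification head is trained on the resulting embeddings. The sketch below illustrates that pair-generation step in plain Python; it is a simplified illustration of the idea, not SetFit's actual implementation, and the example texts are invented:

```python
from itertools import combinations

def generate_pairs(examples):
    """Expand a few (text, label) examples into contrastive training pairs.

    Same-class pairs get label 1 (positive), cross-class pairs get label 0
    (negative). SetFit fine-tunes the Sentence Transformer on pairs like
    these with a contrastive objective (simplified sketch).
    """
    pairs = []
    for (text_a, label_a), (text_b, label_b) in combinations(examples, 2):
        pairs.append((text_a, text_b, 1 if label_a == label_b else 0))
    return pairs

# Eight labeled examples (4 per class) -- invented for illustration.
examples = [
    ("great product, works perfectly", "positive"),
    ("absolutely love it", "positive"),
    ("exceeded my expectations", "positive"),
    ("would buy again", "positive"),
    ("broke after one day", "negative"),
    ("terrible customer service", "negative"),
    ("waste of money", "negative"),
    ("very disappointed", "negative"),
]

pairs = generate_pairs(examples)
# C(8, 2) = 28 pairs from only 8 labeled examples: 12 positives, 16 negatives.
print(len(pairs))
```

The quadratic blow-up (n labeled examples yield n·(n−1)/2 pairs) is what turns a tiny labeled set into enough signal for contrastive fine-tuning.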