Fine-tune on custom chat dataset using QLoRA & PEFT

#19 opened by yashk92

Is it possible to fine-tune these GPTQ models, especially this llama2-13b-chat-gptq model, on a custom chat dataset? If so, what is the best structure for preparing the data? I plan to use it with LangChain to create a conversational bot (just like ChatGPT), so I might use the ConversationChain to achieve this. Thanks!
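For illustration, here is a minimal sketch of one common way to structure chat data for a Llama-2 chat model: each conversation is flattened into a single training string using the Llama-2 `[INST]` prompt format. The helper name and the `(user, assistant)` pair layout are assumptions made for this example, not a prescribed schema.

```python
# Minimal sketch (assuming the Llama-2 chat prompt format) of turning a
# multi-turn conversation into one training string. The function name and
# the (user, assistant) tuple layout are hypothetical -- adapt to your data.

SYSTEM = "You are a helpful assistant."

def format_llama2_chat(turns, system=SYSTEM):
    """turns: list of (user_message, assistant_reply) pairs, in order."""
    text = ""
    for i, (user, assistant) in enumerate(turns):
        if i == 0:
            # The system prompt is wrapped in <<SYS>> tags inside the
            # first [INST] block only.
            prompt = f"<<SYS>>\n{system}\n<</SYS>>\n\n{user}"
        else:
            prompt = user
        text += f"<s>[INST] {prompt} [/INST] {assistant} </s>"
    return text

example = format_llama2_chat([
    ("What is QLoRA?",
     "QLoRA fine-tunes a quantized base model by training low-rank adapters."),
    ("Can I use it with GPTQ models?",
     "Yes, by attaching PEFT adapters on top of a GPTQ-quantized base."),
])
print(example)
```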

Here's a related Github issue

Page not found! @RonanMcGovern 🫤

Howdy, yeah it may have been taken down because work is underway to seamlessly integrate it with TheBloke's models. Right now, you can do the fine-tuning, but you need to start with one of their models.

See here for the Colab notebook.

Apparently this will all be fixed up within 1-2 weeks.
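For reference, a minimal sketch of what that setup can look like: attaching LoRA adapters to a GPTQ-quantized base model with PEFT. The model id, rank, and target modules below are illustrative assumptions rather than settings from this thread, and loading a GPTQ checkpoint through transformers presumes optimum and auto-gptq are installed; see the linked notebook for the actual recipe.

```python
# Hedged sketch: LoRA adapters on a GPTQ-quantized base via PEFT.
# Assumes transformers' GPTQ integration (optimum + auto-gptq installed).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "TheBloke/Llama-2-13B-chat-GPTQ"  # assumed example checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Cast norm/embedding layers for training stability and enable
# gradient checkpointing on the frozen quantized base.
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                                 # adapter rank (assumption)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # typical Llama attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters are trainable
```

From here the wrapped model can be passed to a standard `transformers` `Trainer` loop on the formatted chat strings.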
