source code and paper?

#6
by josephykwang - opened

Is there any write-up about:

  1. how you decided on these two models?
  2. what merge technique you used?
Owner

I am a loyal Kaggle player.
The most important thing I learned from Kaggle is that model ensembling or stacking is all you need.
I believe this also applies to transformers.
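
For what it's worth, the most common meaning of "merging" two same-architecture transformers is interpolating their weights into a single checkpoint (what tools like mergekit call a linear merge). A minimal sketch of that idea, with placeholder model names and assuming both checkpoints share the same architecture and tokenizer:

```python
import torch
from transformers import AutoModelForCausalLM

# Placeholder checkpoint names -- both models must share the same architecture.
MODEL_A = "org/model-a"
MODEL_B = "org/model-b"
ALPHA = 0.5  # interpolation weight given to model A

model_a = AutoModelForCausalLM.from_pretrained(MODEL_A, torch_dtype=torch.float16)
model_b = AutoModelForCausalLM.from_pretrained(MODEL_B, torch_dtype=torch.float16)

state_a = model_a.state_dict()
state_b = model_b.state_dict()

# Linear merge: element-wise interpolation of every parameter tensor.
merged = {name: ALPHA * state_a[name] + (1.0 - ALPHA) * state_b[name] for name in state_a}

model_a.load_state_dict(merged)
model_a.save_pretrained("merged-model")
```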

How do you ensemble two LLMs? I searched the Internet and found a tool called LLM-Blender; is that what you used?

Owner

maybe this one?

Chai Research presents Blending Is All You Need

Cheaper, Better Alternative to Trillion-Parameters LLM

In conversational AI research, there's a noticeable trend towards developing models with a larger number of parameters, exemplified by models like ChatGPT. While these expansive models tend to generate increasingly better chat responses, they demand significant computational resources and memory. This study explores a pertinent question: Can a combination of smaller models collaboratively achieve comparable or enhanced performance relative to a singular large model? We introduce an approach termed "blending", a straightforward yet effective method of integrating multiple chat AIs. Our empirical evidence suggests that when specific smaller models are synergistically blended, they can potentially outperform or match the capabilities of much larger counterparts. For instance, integrating just three models of moderate size (6B/13B parameters) can rival or even surpass the performance metrics of a substantially larger model like ChatGPT (175B+ parameters). This hypothesis is rigorously tested using A/B testing methodologies with a large user base on the Chai research platform over a span of thirty days. The findings underscore the potential of the "blending" strategy as a viable approach for enhancing chat AI efficacy without a corresponding surge in computational demands.
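
As I read the Chai paper, "blending" works at the response level: for each turn in the conversation, one of the base chat models is picked at random to generate the reply, conditioned on the full shared history, so the models implicitly influence each other. A rough sketch of that idea, with placeholder model names:

```python
import random
from transformers import pipeline

# Placeholder model names -- the paper blends several moderate-size chat models.
chatbots = [
    pipeline("text-generation", model="org/chat-model-a"),
    pipeline("text-generation", model="org/chat-model-b"),
    pipeline("text-generation", model="org/chat-model-c"),
]

history = "User: Hi there!\nAssistant:"
for user_msg in ["Tell me a joke.", "Another one, please."]:
    # Blending: randomly pick which chat AI answers this turn. Every model is
    # conditioned on the shared history, so earlier replies (possibly from
    # other models) implicitly influence the current response.
    bot = random.choice(chatbots)
    completion = bot(history, max_new_tokens=64, do_sample=True, return_full_text=False)
    history += completion[0]["generated_text"]
    history += f"\nUser: {user_msg}\nAssistant:"

print(history)
```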

https://arxiv.org/abs/2401.04088 is a sparse MoE. In their https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1, there are 8 experts.
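
For contrast, Mixtral's sparse MoE does its mixing inside each layer at the token level: a learned gate sends every token to the top-2 of 8 expert feed-forward networks. A toy sketch of that routing (simplified, not Mixtral's actual implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Toy sparse MoE feed-forward layer: 8 experts, top-2 routing per token."""

    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        logits = self.gate(x)                               # (tokens, n_experts)
        weights, idx = torch.topk(logits, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)                # normalize over chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                       # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out

# Example: route 10 tokens through the toy layer.
layer = ToyMoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```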

In your model card,

  1. you are using two models
  2. it is not clear whether you are simply "merging" the two dense models' outputs (output-level merging would look like the sketch after this list)
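
To make point 2 concrete, "merging the outputs" would mean running both dense models at inference time and combining their next-token distributions, which is very different from merging their weights into one checkpoint. A rough sketch of logit averaging, with placeholder model names and assuming the two models share a tokenizer and vocabulary:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder names -- assumes both models share the same tokenizer and vocab.
tok = AutoTokenizer.from_pretrained("org/model-a")
model_a = AutoModelForCausalLM.from_pretrained("org/model-a")
model_b = AutoModelForCausalLM.from_pretrained("org/model-b")

prompt = "The capital of France is"
ids = tok(prompt, return_tensors="pt").input_ids

for _ in range(20):  # greedy decoding with averaged next-token logits
    with torch.no_grad():
        logits_a = model_a(ids).logits[:, -1, :]
        logits_b = model_b(ids).logits[:, -1, :]
    next_id = (0.5 * logits_a + 0.5 * logits_b).argmax(dim=-1, keepdim=True)
    ids = torch.cat([ids, next_id], dim=-1)

print(tok.decode(ids[0], skip_special_tokens=True))
```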


See the Chai paper: "This means that the different chat AIs are able to implicitly influence the output of the current response. As a result, the current response is a blending of individual chat AI strengths, as they collaborate to create an overall more engaging conversation."

I don't think this is the same as the MoE approach.

Have you trained the model? These chat models even have different chat templates.
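
On the template point: each chat model ships its own prompt format, which tokenizer.apply_chat_template exposes, so a weight merge alone does not say which format the merged model expects. For example (the output shown is approximate):

```python
from transformers import AutoTokenizer

messages = [{"role": "user", "content": "Hello, who are you?"}]

# Each base model renders this conversation with its own special tokens and
# markers, so a merged model inherits an ambiguous prompt format.
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
# Roughly: "<s>[INST] Hello, who are you? [/INST]"
```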
