This model is amazingly good but seems lost in the `llama 3` hype :(

#8
by jukofyork - opened

This model seems to have been lost in all the hype around llama 3, but IMO it's actually the best open LLM for coding so far - much better than the official mixtral-instruct from my testing.

It's especially good at refactoring code: with other models I always have to break the work up for them, or else they get confused or just rename a few variables, etc; this one makes its own plan and does a really thorough job of the refactoring!

I'm not sure why the official mixtral-instruct is so much worse - previously, with the 8x7b mixtral-instruct, it was the other way round and no fine-tunes were as good... My Q4_K_S of the official mixtral-instruct also seems to stop randomly, and there are reports of other weirdness in the HF discussions for the non-quantized version too.

https://reddit.com/comments/1c9s4mf/comment/l0no0y9

... I did notice that the together.ai version does have this strange tendency to cut off its answer with a few sentences left in its response. I can get past this simply by typing "more" but it is a bit annoying when it happens over and over.

Strange that he gets this with this model too! I wonder if there is a bug in the conversion from HF, or even a bug in the original HF json files, causing this?
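One quick sanity check for this theory is to look at what EOS token the repo's `tokenizer_config.json` actually declares, since a wrong or oddly-mapped EOS token is a common cause of answers cutting off early. A minimal sketch (it writes a stand-in `tokenizer_config.json` just so it runs; point the path at the real file from the downloaded repo instead):

```python
import json

# Stand-in config so the snippet is self-contained -- replace with the
# real tokenizer_config.json from the downloaded HF repo.
sample = {"eos_token": {"content": "</s>", "lstrip": False, "rstrip": False}}
with open("tokenizer_config.json", "w") as f:
    json.dump(sample, f)

with open("tokenizer_config.json") as f:
    cfg = json.load(f)

# eos_token can be a plain string or a dict with a "content" key,
# depending on the transformers version that saved the config.
eos = cfg.get("eos_token")
if isinstance(eos, dict):
    eos = eos.get("content")

print("eos_token:", eos)
```

If the declared EOS token differs from what the GGUF conversion baked in, that mismatch would explain random early stops in the quantized version.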


https://reddit.com/comments/1c9s4mf/comment/l0oo08b

It just wasn't worth the headache for a marginal improvement over just something like "Answer the following question:". I typically find having at least that much of a system prompt helps some models from not just ending immediately or going off on a tangent.

I can't tell if he's referring to wizard-lm-2 specifically in this post, or if the "ending immediately" refers to the same problem manifestation, but interestingly I didn't use a system prompt, so I will try that with the official mixtral-instruct now too.
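For anyone wanting to try the same workaround: the mixtral-instruct template has no dedicated system role, so the usual approach is to just prepend the system text inside the first `[INST]` block. A minimal sketch (the exact template string is an assumption based on the published Mistral/Mixtral instruct format; check your inference backend's chat template to be sure):

```python
def build_prompt(user_msg: str,
                 system_msg: str = "Answer the following question:") -> str:
    # Mixtral-instruct style template; the "system prompt" is simply
    # prepended to the first user turn inside the [INST] block.
    return f"<s>[INST] {system_msg}\n{user_msg} [/INST]"

print(build_prompt("Refactor this function to remove duplication."))
```

Even a one-line system prompt like this seems to be what the Reddit commenter found helps stop models from ending immediately or going off on a tangent.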
