#15: [AUTOMATED] Model Memory Requirements (opened 8 days ago by model-sizer-bot)
#14: The Model Stops Engaging in Conversation (2 comments; opened 17 days ago by Albihany)
#13: generation_config.json: add a mapping for the special token '<|im_end|>' so generation stops when <|im_end|> is encountered (opened 18 days ago by zjyhf)
#12: Add the special token '<|im_end|>' to the tokenizer so generation stops when <|im_end|> is encountered (opened 18 days ago by zjyhf)
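The two patches above (#13 and #12) address the same failure mode: if '<|im_end|>' is not registered as an end-of-sequence token, the decoding loop never receives a stop signal and generation runs until the token budget is exhausted. As a minimal sketch of that mechanism (plain Python; the token id and `toy_step` model are hypothetical stand-ins, not the actual library code), a decoding loop only halts early when the emitted id is in the registered EOS set:

```python
# Sketch of why an unregistered stop token causes runaway generation.
# IM_END_ID and toy_step are hypothetical illustrations, not real model values.

IM_END_ID = 32001  # hypothetical id of the '<|im_end|>' special token

def decode(step, eos_token_ids, max_new_tokens=50):
    """Greedy-style loop: stop as soon as a generated id is a registered EOS."""
    out = []
    for _ in range(max_new_tokens):
        tok = step(out)
        out.append(tok)
        if tok in eos_token_ids:  # the check the config/tokenizer patch enables
            break
    return out

def toy_step(prefix):
    """Toy 'model' that emits a few tokens, then '<|im_end|>' forever."""
    script = [11, 22, 33, IM_END_ID]
    return script[min(len(prefix), len(script) - 1)]

# Without '<|im_end|>' registered as EOS, the loop runs to max_new_tokens.
runaway = decode(toy_step, eos_token_ids={0})
# With it registered (what the generation_config.json change does), it stops early.
stopped = decode(toy_step, eos_token_ids={0, IM_END_ID})
```

Here `runaway` reaches the full 50-token budget, while `stopped` halts after four tokens, ending on `IM_END_ID`.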
#8: About tokens used in this model (1 comment; opened 25 days ago by icoicqico)
#6: Multi-lang? (1 comment; opened 28 days ago by DalyD)
#5: Upload to ollama (opened 29 days ago by nonetrix)
#4: Adding `safetensors` variant of this model (opened about 1 month ago by lucataco)
#3: 🚩 Report: Legal issue(s) (3 comments; opened about 1 month ago by deleted)
#2: Should be "Llama 3ChatQA-1.5-70B" (3 comments; opened about 1 month ago by just1moremodel)
#1: Concerns regarding Prompt Format (6 comments; opened about 1 month ago by wolfram)