Official AQLM quantization of meta-llama/Meta-Llama-3-8B-Instruct.

For this quantization we used 1 codebook of 16 bits (the 1x16 scheme), i.e., roughly 2 bits per weight.
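As a rough sanity check on the numbers below (a sketch: the exact parameter count and which layers stay in FP16 are assumptions, not taken from the checkpoint), a 1x16 AQLM code stores one 16-bit index per group of 8 weights, i.e., 2 bits per quantized weight, while the embedding and output-projection matrices are assumed to remain in FP16:

```python
# Back-of-the-envelope estimate of the 2-bit AQLM checkpoint size.
# Assumptions (not from the model card): ~8.03B total params,
# vocab 128256, hidden size 4096, embeddings kept in FP16.
params_total = 8.03e9
embed_params = 2 * 128256 * 4096      # input + output embedding matrices
quant_params = params_total - embed_params

bits_quant = quant_params * 2          # 16-bit code per group of 8 weights -> 2 bits/weight
bits_fp16 = embed_params * 16          # unquantized FP16 tensors
size_gb = (bits_quant + bits_fp16) / 8 / 1e9

print(f"estimated size: ~{size_gb:.1f} GB")
```

This lands near the ~4.1 GB reported below; the remainder is accounted for by the codebooks and per-group scales, which this sketch ignores.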

Results:

| Model | Quantization | MMLU (5-shot) | GSM8k (8-shot) | ArcC | ArcE | Hellaswag | Winogrande | PiQA | Model size, GB |
|---|---|---|---|---|---|---|---|---|---|
| meta-llama/Meta-Llama-3-8B-Instruct | None | 0.6560 | 0.7475 | 0.5299 | 0.8165 | 0.5771 | 0.7867 | 0.7206 | 16.1 |
| | 1x16 | 0.5872 | 0.5087 | 0.4590 | 0.7710 | 0.5491 | 0.7726 | 0.6953 | 4.1 |
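The checkpoint can be loaded through the standard transformers API once the AQLM inference kernels are installed (`pip install aqlm[gpu]`). A minimal sketch, assuming a CUDA GPU and that downloading the ~4.1 GB checkpoint is acceptable:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ISTA-DASLab/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16"

# Requires the AQLM kernels: pip install aqlm[gpu]
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Explain AQLM quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```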

UPD 02.05.2024: this version of the model was produced with an improved fine-tuning procedure.

Collection including ISTA-DASLab/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16