GGUF Creation from Llamafy

#1 by RonanMcGovern - opened

Thanks for making this. I had hoped this would allow the model to be GGUF'd, but an error persists:

```
params = Params(n_vocab=151936, n_embd=4096, n_layer=32, n_ctx=32768, n_ff=11008, n_head=32, n_head_kv=32, n_experts=None, n_experts_used=None, f_norm_eps=1e-06, rope_scaling_type=None, f_rope_freq_base=1000000.0, f_rope_scale=None, n_orig_ctx=None, rope_finetuned=None, ftype=None, path_model=PosixPath('models'))
Found vocab files: {'tokenizer.model': None, 'vocab.json': PosixPath('models/vocab.json'), 'tokenizer.json': PosixPath('models/tokenizer.json')}
Loading vocab file 'models/vocab.json', type 'spm'
Traceback (most recent call last):
  File "/workspace/llama.cpp/convert.py", line 1483, in <module>
    main()
  File "/workspace/llama.cpp/convert.py", line 1451, in main
    vocab, special_vocab = vocab_factory.load_vocab(args.vocab_type, model_parent_path)
  File "/workspace/llama.cpp/convert.py", line 1336, in load_vocab
    vocab = SentencePieceVocab(
  File "/workspace/llama.cpp/convert.py", line 394, in __init__
    self.sentencepiece_tokenizer = SentencePieceProcessor(str(fname_tokenizer))
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 447, in Init
    self.Load(model_file=model_file, model_proto=model_proto)
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
```

After calling:

```
!python convert.py models/
```
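One thing that might be worth a try (untested on my end): the traceback shows the vocab type comes from `args.vocab_type`, and the log above says convert.py defaulted to `'spm'` for `vocab.json`. Since Qwen2 uses a BPE tokenizer rather than SentencePiece, forcing BPE might get past the parse error:

```
# The log shows convert.py treated vocab.json as SentencePiece ('spm');
# Qwen2 ships a BPE tokenizer, so forcing the vocab type may avoid the error.
!python convert.py models/ --vocab-type bpe
```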

I know your repo isn't about GGUF, but since you understand how to Llamafy, I thought you might have some ideas.

tokenizer_class": "Qwen2Tokenizer",

tokenizer_class": "Qwen2Tokenizer",

Right, that's what's in the tokenizer_config file.

Are you saying that's the problem, and it can't be solved?

Or are you saying to change it somehow?
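For reference, here's a quick check of what the tokenizer actually is (a minimal sketch, assuming the model files are in `models/`):

```python
# Minimal sanity check: load the tokenizer with transformers and inspect it.
# Qwen2Tokenizer is a GPT2-style BPE tokenizer (vocab.json + merges.txt),
# not a SentencePiece one, which may be why the 'spm' path fails.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("models/")
print(type(tok).__name__)  # e.g. Qwen2TokenizerFast
print(tok.vocab_size)      # may differ from the padded n_vocab=151936 in the params dump
```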

Thanks.

Can you share your `transformers` version?

Sure:

- `transformers` version: 4.38.1
- Platform: Linux-5.4.0-139-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.27.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: A6000
- Using distributed or parallel set-up in script?: Single GPU

I don't know... maybe you can download `tokenizer.model` again.

OK, no problem, I appreciate the help.

BTW, I don't see a `tokenizer.model` file; do you mean `tokenizer_config.json`?
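In case it helps, here's how I'd check which tokenizer files are actually present (a minimal sketch, assuming everything sits directly in `models/`):

```python
# List the tokenizer-related files in the model directory. A BPE
# (Qwen2-style) checkpoint ships vocab.json, merges.txt, and tokenizer.json,
# but no SentencePiece tokenizer.model.
from pathlib import Path

names = {"vocab.json", "merges.txt", "tokenizer.json",
         "tokenizer.model", "tokenizer_config.json"}
for f in sorted(Path("models").iterdir()):
    if f.name in names:
        print(f.name)
```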
