Slow inference speed and high VRAM using huggingface transformers

#2
by Starlento - opened

When using transformers to run inference with this model (the example code in the README), VRAM keeps climbing to around 14 GB even though the output is only a short response.
It seems generation runs for the default ~4000 tokens without stopping at the <|im_end|> stop token.

I also found duplicated code here:
https://huggingface.co/openbmb/MiniCPM-2B-128k/blob/cc0a7847338aa03a64d575e0d7ff54d4bd008ff7/modeling_minicpm.py#L1326

I do not know how to pass the stop token to your self.generate() function; could you kindly fix it?

OpenBMB org

Hi, sorry for the late reply. We recommend using vLLM for generation, where you can specify a stop token. Please check our blog for usage examples.
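Roughly, something along these lines (a minimal sketch; the prompt format and sampling settings here are illustrative, not the exact blog example):

```python
from vllm import LLM, SamplingParams

# Model path is the one from this repo; dtype and sampling values are assumptions.
llm = LLM(model="openbmb/MiniCPM-2B-128k", trust_remote_code=True, dtype="bfloat16")

params = SamplingParams(
    temperature=0.8,
    top_p=0.8,
    max_tokens=512,
    stop=["<|im_end|>"],  # generation halts when this stop string is produced
)

# Assumed ChatML-style prompt with <|im_start|>/<|im_end|> markers.
prompt = "<|im_start|>user\nWhat is the capital of France?<|im_end|>\n<|im_start|>assistant\n"
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```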

Sorry as well that I forgot about this issue. In fact, for transformers I can set the stop token by changing the tokenizer's eos_token.
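For reference, roughly what this looks like (a sketch only; the prompt string and generation settings are placeholders, not the README example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "openbmb/MiniCPM-2B-128k"
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.bfloat16, device_map="cuda", trust_remote_code=True
)

# Look up the id of the chat end token and hand it to generate() as the EOS id,
# so decoding stops at <|im_end|> instead of running to max_new_tokens.
im_end_id = tokenizer.convert_tokens_to_ids("<|im_end|>")

prompt = "<|im_start|>user\nWhat is the capital of France?<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512, eos_token_id=im_end_id)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```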
Thank you very much for your reply; it is never too late. By the way, does the vLLM implementation save any VRAM compared to transformers?

Starlento changed discussion status to closed
