---
language:
- zh
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-generation
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/644a78de7c5c68c7762886eb/7tt61njxHUvg0xtEc-EeI.png)

> This is not a chat model. It was trained on the Wizard-LM-Chinese-instruct-evol dataset for a small number of steps to test the model's basic Chinese ability.
> This is version 1; version 2 will follow with a longer context window and a chat model.

Training setup:

- context: 2k
- dataset: Wizard-LM-Chinese-instruct-evol
- batch size: 8
- steps: 500
- epochs: 2

How to use?

The standard Hugging Face API is enough, or use another framework such as vLLM (a serving sketch follows at the end of this card). Continued training is also supported (a fine-tuning sketch follows as well).

```python
import transformers
import torch

model_id = "BoyangZ/llama3-chinese"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# "Who can win the election, Trump or Biden?"
pipeline("川普和拜登谁能赢得大选??")
# [{'generated_text': '川普和拜登谁能赢得大选?](https://www.voachinese.com'}]
```

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")

model = AutoModelForCausalLM.from_pretrained(
    "BoyangZ/llama3-chinese", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "BoyangZ/llama3-chinese", trust_remote_code=True
)

# "If Trump and Biden both run for US president, who is more likely
# to win? Give an analysis."
inputs = tokenizer(
    "川普和拜登一起竞选,美国总统,谁获胜的几率大,分析一下?",
    return_tensors="pt",
    return_attention_mask=False,
)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```

Example outputs:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/644a78de7c5c68c7762886eb/t-Ij3o1PwBzZMv8hg9bJM.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/644a78de7c5c68c7762886eb/Wa1etCkic0Uz2mDOgmjY5.png)

Contact: WeChat 18618377979, Gmail zhouboyang1983@gmail.com
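For higher-throughput inference, the model can also be loaded with vLLM, as mentioned above. This is a minimal offline-inference sketch, assuming vLLM is installed and a CUDA GPU is available; the sampling parameters are illustrative choices, not part of the original card.

```python
# Minimal vLLM offline-inference sketch (assumes `pip install vllm` and a CUDA GPU).
from vllm import LLM, SamplingParams

llm = LLM(model="BoyangZ/llama3-chinese")

# Temperature and token budget are illustrative, not from the card.
params = SamplingParams(temperature=0.7, max_tokens=200)

# "Who can win the election, Trump or Biden?"
outputs = llm.generate(["川普和拜登谁能赢得大选?"], params)
print(outputs[0].outputs[0].text)
```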
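The card states that continued training is supported. Below is a hedged sketch of one way to do that with the Hugging Face `Trainer`, reusing the hyperparameters stated above (2k context, batch size 8, 500 steps). The JSONL file name and its `text` column are assumptions for illustration; substitute your own copy of the Wizard-LM-Chinese-instruct-evol data.

```python
# Hedged continued-training sketch; file name and "text" column are assumptions.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "BoyangZ/llama3-chinese"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Llama tokenizers usually ship without a pad token; reuse EOS for padding.
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Illustrative data file; replace with your own instruction data.
dataset = load_dataset("json", data_files="wizardlm_zh.jsonl", split="train")

def tokenize(batch):
    # Truncate to the 2k context window used in the original run.
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="llama3-chinese-continued",
    per_device_train_batch_size=8,  # matches the card's batch size
    max_steps=500,                  # matches the card's step count (overrides epochs)
    bf16=True,
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```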