
This model was built with the chat vector extraction method.

- base model: beomi/Llama-3-Open-Ko-8B
- v2 base model: sosoai/hansoldeco-beomi-llama3-8b-ko-v0.1 (fine-tuned on hansoldeco's own domain dataset)
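The chat vector idea is simple weight arithmetic: subtract a base model's weights from an instruction-tuned model's weights, then add that difference to a domain-adapted model. A minimal sketch of the merge step, using toy state dicts in place of the real checkpoints (the function name and `scale` parameter are illustrative, not from this repo):

```python
import torch

def apply_chat_vector(domain_sd, chat_sd, base_sd, scale=1.0):
    """Add the chat vector (chat - base) to a domain-adapted model's weights."""
    merged = {}
    for name, w in domain_sd.items():
        if name in chat_sd and name in base_sd:
            merged[name] = w + scale * (chat_sd[name] - base_sd[name])
        else:
            # parameters missing from either checkpoint are kept as-is
            merged[name] = w.clone()
    return merged

# toy tensors standing in for real model state dicts
base = {"linear.weight": torch.zeros(2, 2)}
chat = {"linear.weight": torch.ones(2, 2)}
domain = {"linear.weight": torch.full((2, 2), 2.0)}

merged = apply_chat_vector(domain, chat, base)
print(merged["linear.weight"])  # domain + (chat - base) = 3.0 everywhere
```

In practice the same loop would run over `model.state_dict()` of the three full checkpoints before saving the merged weights.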

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "sosoai/hansoldeco-beomi-llama3-8b-ko-v0.2-chatvector"
tokenizer = AutoTokenizer.from_pretrained("sosoai/hansoldeco-beomi-llama3-8b-ko-v0.1")

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

conversation = [
    # "Hello! Who are you?"
    {'role': 'user', 'content': "μ•ˆλ…•ν•˜μ„Έμš”! λ„ˆλŠ” λˆ„κ΅¬μ„Έμš”?"},
    # "I am Hansoldeco's chatbot specializing in wallpapering, wallpaper, and
    # finishing-material defects. How can I help you? Please ask me anything related."
    {'role': 'assistant', 'content': "μ €λŠ” ν•œμ†”λ°μ½” 도배, 벽지 그리고 마감자재 ν•˜μž μ „λ¬Έ μ±—λ΄‡μž…λ‹ˆλ‹€. 무엇을 λ„μ™€λ“œλ¦΄κΉŒμš”? 이와 κ΄€λ ¨λœ λͺ¨λ“  μ§ˆλ¬Έμ„ ν•΄μ£Όμ„Έμš”."},
    # "Hello! Tell me your name. How do I fix a wallpaper defect?"
    {'role': 'user', 'content': "μ•ˆλ…•ν•˜μ„Έμš”! λ‹Ήμ‹ μ˜ 이름을 μ•Œλ €μ£Όμ„Έμš”. λ²½μ§€ν•˜μžλŠ” μ–΄λ–»κ²Œ ν•΄κ²°ν•΄μš”?"},
]

# render the conversation with the model's chat template and generate a reply
prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, use_cache=True, max_new_tokens=256)
output_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(output_text)
```
Model size: 8.03B params (BF16, Safetensors)