
Add indentation (4 spaces) to `tokenizer_config.json` for readability

#5
by alvarobartt - opened
No description provided.
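For reference, a re-indented `tokenizer_config.json` like the one in this PR can be produced with a short script along these lines (the local file path is an assumption; point it at a checkout of the repo):

```python
import json

# Assumed local path to the repo checkout's tokenizer config.
path = "tokenizer_config.json"

# Load the existing (compact) JSON and write it back with 4-space indentation.
with open(path, "r", encoding="utf-8") as f:
    config = json.load(f)

with open(path, "w", encoding="utf-8") as f:
    json.dump(config, f, indent=4, ensure_ascii=False)
    f.write("\n")  # keep a trailing newline
```

The content is unchanged; only the serialization format differs, so the tokenizer loads exactly as before.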

Also, I think a potential improvement would be to add the chat_template, as @lewtun already mentioned at https://huggingface.co/allenai/tulu-2-dpo-70b/discussions/2. Even though Tulu is probably intended for instruction-following scenarios rather than chat-like ones (there seems to be no "system" role or similar), templating may help users with formatting πŸ’ͺ🏻 — see the sketch below.
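As a rough illustration only (the template eventually added upstream may differ), a Jinja chat_template matching the `<|user|>` / `<|assistant|>` prompt format from the model card could be attached and tested like this; the template string here is an assumption, not the repo's actual config:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/tulu-2-dpo-70b")

# Hypothetical template: wrap each turn in Tulu-style role tags and append an
# assistant tag when a generation prompt is requested. No "system" role is
# handled, matching the comment above.
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}{{ '<|user|>\n' + message['content'] + '\n' }}"
    "{% elif message['role'] == 'assistant' %}{{ '<|assistant|>\n' + message['content'] + eos_token + '\n' }}"
    "{% endif %}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '<|assistant|>\n' }}{% endif %}"
)

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is the capital of France?"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
# <|user|>
# What is the capital of France?
# <|assistant|>
```

With a template like this in `tokenizer_config.json`, users could rely on `apply_chat_template` instead of hand-formatting the prompt.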

Allen Institute for AI org

Thanks! Yeah, I'm planning to add the chat_template, since it seems quite useful.

hamishivi changed pull request status to merged
