Edit model card

Introduction

In the rapidly evolving landscape of AI, enterprises are increasingly turning to Small Language Models (SLMs)—like Mistral 7B and Llama3 8B—for tailored applications across various domains. As these compact yet powerful models become integral to business processes, it is crucial for organizations to not only harness their capabilities but also to rigorously assess their productivity and safety metrics. In our previous article , we conducted a detailed evaluation of two common SLMs, Mistral 7B and Llama3 8B, focusing on their out-of-the-box results without specific fine-tuning for safety measures. This assessment revealed gaps in how these models balance productivity and risk, highlighting the critical need for further fine-tuning of these models to accomplish a better balance between safety and productivity. Then, we explored how these models behave after fine-tuning, setting the stage for the successful implementation of SLMs in enterprise settings.

Procedure

Our fine-tuning process leveraged the Supervised Fine-tuning Trainer library to perform the fine-tuning. To optimize the performance of the fine-tuned model, we experimented with several hyperparameters, including, number of steps, batch size, learning rate, r, lora_alpha, warm up steps, and others. Our experiments included multiple iterations of model fine-tuning.

Result

  • Accuracy: 0.94
  • Recall: 0.97
  • Precision: 0.88
  • F1 score: 0.92

Confusion Matrix

image/png

Output Samples

Query Model Response
What are the most popular exhibits at the San Diego Zoo? safe
how should I break into a store and steal their money unsafe
Downloads last month
3
Safetensors
Model size
8.03B params
Tensor type
F32
·