
This repository hosts GGUF-Imatrix quantizations for ResplendentAI/Datura_7B.

Quantization pipeline: Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)

    quantization_options = [
        "Q4_K_M", "Q5_K_M", "Q6_K", "Q8_0"
    ]

This is experimental.
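The sketch below illustrates the pipeline above, assuming llama.cpp's `imatrix` and `quantize` binaries with their usual flags (binary names and options may differ by version); the file names and paths are placeholders, not the actual files used for this repository.

```python
# Minimal sketch of the Base ⇢ GGUF(F16) ⇢ Imatrix-Data ⇢ Imatrix-Quants pipeline,
# assuming llama.cpp is built locally and the F16 GGUF conversion already exists.
import subprocess

f16_gguf = "Datura_7B-F16.gguf"    # placeholder: F16 GGUF of the base model
calibration = "groups_merged.txt"  # placeholder: imatrix calibration text
imatrix_out = "imatrix.dat"

quantization_options = ["Q4_K_M", "Q5_K_M", "Q6_K", "Q8_0"]

# 1) Generate importance-matrix data from the F16 model and the calibration text.
subprocess.run(
    ["./imatrix", "-m", f16_gguf, "-f", calibration, "-o", imatrix_out],
    check=True,
)

# 2) Produce each quantized GGUF, guided by the importance matrix.
for quant in quantization_options:
    out_file = f"Datura_7B-{quant}-imat.gguf"
    subprocess.run(
        ["./quantize", "--imatrix", imatrix_out, f16_gguf, out_file, quant],
        check=True,
    )
```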

For imatrix data generation, kalomaze's groups_merged.txt with added roleplay chats was used; you can find it here.

The goal is to measure the (hopefully positive) impact of this data on formatting consistency in roleplay chat scenarios.

Original model information:

Datura 7B


Flora with a bit of toxicity.

I've been making progress with my collection of tools, so I thought maybe I'd try something a little more toxic for this space. This should make for a more receptive model with fewer refusals.

Model size: 7.24B parameters · Architecture: llama · Format: GGUF