
Mixtral Experts with DeepSeek-MoE Architecture

Discord: https://discord.gg/cognitivecomputations

This is a direct extraction of the 8 experts from Mixtral-8x7b-Instruct-v0.1, transferred into the DeepSeek-MoE architecture.
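
For intuition, the transfer amounts to copying each Mixtral expert's MLP weights into the corresponding DeepSeek-MoE expert slot. Below is an illustrative sketch, not the actual conversion script; the DeepSeek-side state-dict key names are assumptions and may differ from the custom code in this repo:

def transfer_experts(mixtral_state, num_layers=32, num_experts=8):
    # Illustrative mapping of Mixtral expert weights onto DeepSeek-MoE slots.
    # Mixtral names its expert projections w1 (gate), w3 (up), and w2 (down).
    new_state = {}
    for layer in range(num_layers):
        for e in range(num_experts):
            src = f"model.layers.{layer}.block_sparse_moe.experts.{e}"
            dst = f"model.layers.{layer}.mlp.experts.{e}"
            new_state[f"{dst}.gate_proj.weight"] = mixtral_state[f"{src}.w1.weight"]
            new_state[f"{dst}.up_proj.weight"] = mixtral_state[f"{src}.w3.weight"]
            new_state[f"{dst}.down_proj.weight"] = mixtral_state[f"{src}.w2.weight"]
    return new_state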

  • Expert Configuration: 2 experts are activated per token (see the config sketch after this list).
  • Performance: On par with the original Instruct model, and possibly slightly better.
  • Evaluations: Formal evals will follow once compute frees up; the model also appears more malleable to further training.
  • Experimentation: This is the first of several MoE expert extraction and modification projects we're working on, with more to come. Enjoy.
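
As a quick sanity check, the routing setup can be inspected from the model config. A minimal sketch, assuming the custom DeepSeek-MoE config exposes the expert counts as attributes (exact field names may differ):

from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "cognitivecomputations/DeepMixtral-8x7b-Instruct",
    trust_remote_code=True,
)
# Print the full config and look for the routed-expert count (8) and the
# number of experts activated per token (2).
print(config)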

Instruction Format

To leverage instruction fine-tuning, prompts should be wrapped in [INST] and [/INST] tokens. The very first instruction should begin with the begin-of-sentence token id, while subsequent instructions should not. Assistant generations conclude with an end-of-sentence token id.

Example

text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s>"
"[INST] Do you have mayonnaise recipes? [/INST]"

Applying the Chat Template

This format can be implemented using the apply_chat_template() method from the transformers library:

from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained("cognitivecomputations/DeepMixtral-8x7b-Instruct", trust_remote_code=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/DeepMixtral-8x7b-Instruct")

# Define the conversation messages
messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

# Apply the chat template and move the input ids to the model's device.
# model.to(device) is omitted: device_map="auto" has already placed the weights.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)

# Generate response
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
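
To print only the assistant's reply instead of the full transcript, slice off the prompt tokens before decoding (a small follow-up sketch):

# generated_ids includes the prompt; keep only the newly generated tokens.
new_tokens = generated_ids[0, model_inputs.shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))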

Special thanks to Eric Hartford and Fernando Neto.

  • Lucas Atkins (Crystalcareai)
Model size: 46.7B params · Tensor type: BF16 · Format: Safetensors