
🌟 Buying me coffee is a direct way to show support for this project.

AdaptLLM-4x7B-MoE

AdaptLLM-4x7B-MoE is a Mixture of Experts (MoE) made with the following models using LazyMergekit:

- mlabonne/NeuralBeagle14-7B
- AdaptLLM/finance-chat
- AdaptLLM/medicine-chat
- AdaptLLM/law-chat

💻 Usage

Prompt Template:

<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>

{{ user_message }} [/INST]
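For reference, here is a minimal sketch (illustrative only; the example system prompt and question are assumptions, not part of the original card) of filling this Llama-2-style chat template by hand. The usage code below produces the same format automatically with apply_chat_template.

system_prompt = "You are a helpful assistant."   # assumed example system prompt
user_message = "What is a Mixture of Experts?"   # assumed example question

# Build the chat prompt in the template shown above
prompt = (
    "<s>[INST] <<SYS>>\n"
    f"{system_prompt}\n"
    "<</SYS>>\n\n"
    f"{user_message} [/INST]"
)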
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Isotonic/AdaptLLM-4x7B-MoE"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={
        "torch_dtype": torch.float16,   # half-precision weights
        "low_cpu_mem_usage": True,      # reduce peak RAM while loading
        "device_map": "auto",           # spread layers across available devices
        "load_in_8bit": True,           # 8-bit quantization via bitsandbytes
    },
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=512, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
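If you prefer loading the model directly rather than through pipeline, the sketch below is an illustrative alternative (not from the original card; it assumes transformers, bitsandbytes, and accelerate are installed) using AutoModelForCausalLM with a BitsAndBytesConfig for the same 8-bit loading:

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Isotonic/AdaptLLM-4x7B-MoE"

# 8-bit quantization config, equivalent in spirit to load_in_8bit above
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))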

🧩 Configuration

base_model: mlabonne/NeuralBeagle14-7B
experts:
  - source_model: mlabonne/NeuralBeagle14-7B
    positive_prompts:
    - "chat"
    - "assistant"
    - "tell me"
    - "explain"
    - "storywriting"
    - "write"
    - "scene"
    - "story"
    - "character"
    - "instruct"
    - "summarize"
    - "count"

  - source_model: AdaptLLM/finance-chat
    positive_prompts:
    - "personal finance"
    - "budgeting"
    - "investing"
    - "retirement planning"
    - "debt management"
    - "financial education"
    - "consumer protection"
    - "financial"
    - "money"
    - "investment"
    - "banking"
    - "stock"
    - "bond"
    - "portfolio"
    - "risk"
    - "return"

  - source_model: AdaptLLM/medicine-chat
    positive_prompts:
    - "diagnose"
    - "treat"
    - "disease"
    - "symptom"
    - "medication"
    - "anatomy"
    - "physiology"
    - "pharmacology"
    - "clinical trial"
    - "medical research"

  - source_model: AdaptLLM/law-chat
    positive_prompts:
    - "law"
    - "legal"
    - "attorney"
    - "lawyer"
    - "court"
    - "contract"
    - "criminal"
    - "evidence"
    - "procedure"
    - "contracts"
    - "mergers & acquisitions"
    - "corporate governance"
    - "intellectual property"
    - "employment law"
    - "international trade"
    - "competition law"
    - "antitrust"
    - "litigation"
    - "arbitration"
    - "mediation"