
Model Card for DictaLM-2.0

The DictaLM-2.0 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters trained to specialize in Hebrew text.

For full details of this model, please read our release blog post.

This is the base model, designed for completion (not for chat!), provided in the GGUF format for use with llama.cpp.

Two versions are available: float16 precision (*.F16.gguf) and 4-bit quantized (*.Q4_K_M.gguf).

You can view and access the full collection of base/instruct unquantized/quantized versions of DictaLM-2.0 here.
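A minimal sketch of loading one of these files with the llama-cpp-python binding. The filename `dictalm2.0.Q4_K_M.gguf` is a hypothetical example derived from the `*.Q4_K_M.gguf` pattern above; check the repository's file list for the exact name before downloading.

```python
# Build the expected filename from the precision variant named in the card.
# "Q4_K_M" for the 4-bit quantized file, "F16" for float16 precision.
precision = "Q4_K_M"
model_file = f"dictalm2.0.{precision}.gguf"  # hypothetical name; verify in the repo
print(model_file)

# Loading and running completion (not chat) would then look like this
# (requires `pip install llama-cpp-python` and the downloaded .gguf file):
#
# from llama_cpp import Llama
# llm = Llama(model_path=model_file, n_ctx=2048)
# out = llm("<Hebrew prompt>", max_tokens=128)
# print(out["choices"][0]["text"])
```

Since this is a base model, prompts should be plain text to continue, not chat-formatted instructions.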

Model Architecture

DictaLM-2.0 is based on the Mistral-7B-v0.1 model with the following changes:

  • An extended tokenizer with 1,000 injected tokens specifically for Hebrew, improving the compression rate from 5.78 tokens/word to 2.76 tokens/word.
  • Continued pretraining on over 190B tokens of naturally occurring text, 50% Hebrew and 50% English.
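The tokenizer change roughly doubles how much Hebrew text fits in a fixed context window; a quick check of the figures quoted above:

```python
# Tokens-per-word figures for Hebrew text, as quoted in the card
before = 5.78  # original Mistral-7B-v0.1 tokenizer
after = 2.76   # extended tokenizer with 1,000 added Hebrew tokens

# Improvement factor: ~2.09x fewer tokens per Hebrew word,
# i.e. ~2.09x more Hebrew text per fixed context length
factor = before / after
print(round(factor, 2))  # → 2.09
```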

Notice

DictaLM 2.0 is a pretrained base model and therefore does not have any moderation mechanisms.

Citation

If you use this model, please cite:

[Will be added soon]
Model Details

  • Format: GGUF
  • Model size: 7.25B params
  • Architecture: llama
