Dicta-LM 2.0 Collection
The DictaLM-2.0 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters trained to specialize in Hebrew text.
For full details of this model, please read our release blog post.
This is the base model designed for completion (not for chat!) in the GGUF format for use with llama.cpp.
There are two versions available: float16 precision (*.F16.gguf) and 4-bit quantized precision (*.Q4_K_M.gguf).
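Since llama.cpp only loads GGUF files, a quick way to sanity-check a downloaded file is to inspect its header: every GGUF file starts with the 4-byte magic b"GGUF", followed by a little-endian uint32 format version. A minimal sketch (the filename in the usage comment is illustrative, not an exact path from this repository):

```python
import struct

def gguf_header(path):
    """Check whether a file looks like GGUF and return its format version.

    GGUF files begin with the 4-byte magic b"GGUF", then a
    little-endian uint32 version number.
    """
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8 or header[:4] != b"GGUF":
        return False, None
    (version,) = struct.unpack("<I", header[4:8])
    return True, version

# Example (filename is illustrative):
# ok, version = gguf_header("dictalm2.0.Q4_K_M.gguf")
```

A file that passes this check can then be loaded with llama.cpp (via its CLI or bindings such as llama-cpp-python) for plain-text completion.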
You can view and access the full collection of base/instruct, unquantized/quantized versions of DictaLM-2.0 here.
DictaLM-2.0 is based on the Mistral-7B-v0.1 model with the following changes:
DictaLM 2.0 is a pretrained base model and therefore does not have any moderation mechanisms.
If you use this model, please cite:
[Will be added soon]