
NOTE / ANNOUNCEMENT: We've jumped from V0.5 straight to V1.0, and this is the last version of the series. We're sad to announce the end of XT_AURORA, our first SLM series, due to a lack of community activity. We, XeTute, put a lot of effort and countless nights into improving our models, but given how much time, passion, and effort we invested, we got little back from the community. Thank you for the many downloads on this series of SLMs. We'll continue to update the model cards and chat templates. Thank you for being part of our journey.

About this model: XT_AURORA was trained and published by us, XeTute. This model was finetuned on top of the previous beta version [XT_AURORA-OpenBeta-V0.5-GGUF]. This version [V1.0] achieves better general performance and outperforms every previous release [V0.1 - V0.5].

About XT_AURORA: XT_AURORA is a series of SLMs [Slender Language Models] which all aim to provide friendly, human-like conversation. The series is limited by its size [about 1.1B parameters], but we still try to get the best possible output. The context length is very stable up to 2048 tokens; beyond that limit, the model performs only slightly better than V0.5. The context can be upscaled using RoPE scaling, at the cost of slightly weaker logic.
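If you want to experiment with a window larger than the native 2048 tokens, one common approach is linear RoPE frequency scaling at load time. Below is a minimal sketch using the third-party llama-cpp-python bindings; the file path is a placeholder, and the 0.5 scale factor [stretching the window 2x] is an assumption you should tune yourself, not a setting from this card.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder path: point this at your local AURORA GGUF file.
llm = Llama(
    model_path="./XT_AURORA-V1.0.gguf",
    n_ctx=4096,           # twice the native 2048-token window
    rope_freq_scale=0.5,  # linear RoPE scaling: 0.5 stretches positions 2x
)
```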

About this version [V1.0]:

  • High-quality output [it sometimes outperforms 3B models on HumanEval], as long as the context size stays under 2049 tokens.
  • We provide a system prompt [Files and Versions --> chat_template]. The SLM was partly trained using that template, so output is better if you use the prompt at the start of a conversation.
  • AURORA expects the Vicuna chat template [{{user}}: {some input}\nAURORA: {some output}\n{{user}}]; the model will only work correctly with this format. See the sketch after this list.
  • The recommended temperature range is 0.4 to 0.75.
  • Improved chat quality in general emotional / unemotional chat, logical & illogical roleplaying, etc.
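As referenced above, here is a minimal sketch of building a prompt in that Vicuna-style format. It is plain Python string handling; "USER" and the example messages are placeholders, not names fixed by the model.

```python
# Minimal helper for the Vicuna-style template described above.
# "USER" and the example messages are placeholders, not fixed names.
def build_prompt(history, new_message, user="USER"):
    """history: list of (user_message, aurora_reply) pairs."""
    prompt = ""
    for user_msg, aurora_msg in history:
        prompt += f"{user}: {user_msg}\nAURORA: {aurora_msg}\n"
    # End with the assistant tag so the model continues as AURORA.
    prompt += f"{user}: {new_message}\nAURORA:"
    return prompt

print(build_prompt([("Hi!", "Hello, nice to meet you!")], "How are you?"))
```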

All in all, AURORA aims to be a digital friend that is also accessible to people with low-end devices.

Using KoboldCPP [via Termux], we got the model running on a POCO X5 Pro 5G [CPU only, octa-core] in Energy Saver mode. We saw ~5 tokens per second generation and ~15 tokens per second prompt processing.
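If you serve the model through KoboldCPP yourself, you can query the local server from Python. This is a hedged sketch against KoboldCPP's KoboldAI-compatible /api/v1/generate endpoint; the port, stop sequence, and max_length below are assumptions to adjust for your setup, while the temperature follows the recommendation above.

```python
import requests

# KoboldCPP exposes a KoboldAI-compatible HTTP API; 5001 is its default port,
# but check your own launch output. All values here are illustrative.
resp = requests.post(
    "http://localhost:5001/api/v1/generate",
    json={
        "prompt": "USER: Hi, who are you?\nAURORA:",
        "max_length": 128,
        "temperature": 0.6,            # inside the recommended 0.4-0.75 range
        "stop_sequence": ["\nUSER:"],  # stop before the next user turn
    },
    timeout=120,
)
print(resp.json()["results"][0]["text"])
```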

Support us:

  • X: https://www.x.com/XeTute
  • GitHub: https://www.github.com/N0CTRON/
  • Subdomain on Neocities: https://xetute.neocities.org/

We wish you a friendly chat with AURORA.
