---
license: cc-by-4.0
tags:
- requests
- gguf
- quantized
# > [!WARNING]
# > **Notice:**
# > Requests are paused at the moment due to unforeseen circumstances.
---

![requests-banner.png](https://huggingface.co/Lewdiculous/Model-Requests/resolve/main/requests-banner.png)

> [!TIP]
> I apologize for disrupting your experience.
> My upload speeds have been cooked and unstable lately.
> I'd need to move to get a better provider or eventually rent a server.
> If you **want** to and are able...
> [**You can support my various endeavors here (Ko-fi).**](https://ko-fi.com/Lewdiculous)
> In the meantime, I'll make do with the resources at hand.
# Welcome to my GGUF-IQ-Imatrix Model Quantization Requests card!

Please read everything.

This card is meant only to request GGUF-IQ-Imatrix quants for models that meet the requirements below.

**Requirements to request GGUF-Imatrix model quantizations:**

For the model:

- Maximum model parameter size of **11B**.
*At the moment I am unable to accept requests for larger models due to hardware/time limitations.*
  *Preferably for Mistral- and Llama-3-based models in the creative/roleplay niche.*
  *If you need a bigger model, you can try requesting at [mradermacher's](https://huggingface.co/mradermacher/model_requests). Pretty awesome.*

Important:

- Fill the request template as outlined in the next section.

#### How to request a model quantization:

1. Open a [**New Discussion**](https://huggingface.co/Lewdiculous/Model-Requests/discussions/new) titled "`Request: Model-Author/Model-Name`", for example, "`Request: Nitral-AI/Infinitely-Laydiculous-7B`", without the quotation marks.

2. Include the following template in your post and fill in the required information ([example request here](https://huggingface.co/Lewdiculous/Model-Requests/discussions/1)):

```
**[Required] Model name:**

**[Required] Model link:**

**[Required] Brief description:**

**[Required] An image/direct image link to represent the model (square shaped):**

**[Optional] Additional quants (if you want any):**

Default list of quants for reference:
"IQ3_M", "IQ3_XXS", "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K", "Q8_0"
```
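
For context, quants like the defaults above are typically produced with llama.cpp's `llama-imatrix` and `llama-quantize` tools. The sketch below only *prints* the quantize command for each default type; the file names (`model-f16.gguf`, `calibration.txt`, `imatrix.dat`) are placeholder assumptions, not part of this card's workflow:

```shell
# Hedged sketch, assuming llama.cpp tooling and placeholder file names.
# In practice you would first compute an importance matrix, e.g.:
#   ./llama-imatrix -m model-f16.gguf -f calibration.txt -o imatrix.dat
# and then quantize with it. This loop just echoes one quantize
# command per default quant type:
quants="IQ3_M IQ3_XXS Q4_K_M Q4_K_S IQ4_XS Q5_K_M Q5_K_S Q6_K Q8_0"
for q in $quants; do
  echo "llama-quantize --imatrix imatrix.dat model-f16.gguf model-${q}.gguf ${q}"
done
```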