MaziyarPanahi committed on
Commit
fdeda0e
1 Parent(s): ba07e03

Upload folder using huggingface_hub (#1)


- de01553b7e8942244949e8b34e13f0d09bff405e2f9e103de8d6bb2c5444527a (396a57941e278252441af94d953fb675d94cd363)
- e02c3f238666f3f36272b63dd4316a9886e2e59a7f3432d628b67d757320cf2a (0f524cd835b2c4dd2fbaeabaa17ff7f5190a3675)
- eb728e4c28da057be9f68ddacdccbb53cd4cd534c9f843115af6d0051a261e4d (29c552df62be16c02a39cec6a41c7f262b5ad640)
- b86778eb4a65df6c998a7a5dfc5958c7b3b44621603c9e69eb3a1e4b07082042 (4c522360fab2d398b4c4acd014630eba7851f612)
- faf837e6b251ecc8c7ffabbf4231cbab8d4ecf8570e86822e83fb2af20f9ee4e (5f183157bbbfced59a62638c058249b0b7d0c41f)
- faabae5279b868a8754f7c7c99d5ca51e1fa9c952f8246b1f3cc6749917a451e (23da887f14f3644eff26446d9d412e173cfd427e)
- dcd55a35bef572ecf3e3d667249ee500a6d8dbd3fda050cd5f594529f3555842 (4d1b125c30e40a3e2d62b3db7963870f07d540cd)
- eaf18d04a237fa94fc8f7a311a057cd2c0cc2980ab58594ce1b68c558d921e0b (fe195301a47554d81116a5185c124beaebcd0b7f)
- 62582a3485c072dd9f75fc9ebf961fcd6c9b225791994c331797f178e74fb395 (2d6d6ffc5f336dde97f92ead6695a79ebb3bbe8d)
- e62dfe5729bb95ec65dd0474a3d17fbcc9a2673a52788016fa4d8edb8ce214ea (d962a00027ce141f4603d40da08d704b47d45e66)
- fcf31003291a7f78398014a06039f0fd4f9afbbed70baa7266bc4e3152c0e717 (f30ec8b35fdd497318bcf48786f482aa3ed8bc6d)
- 5cedf553080447d18dcfaf723717267632e688b9e9c5d2daa3b4a85a954cd4de (9f41f44b52d36f9d9b0071c8592c82bd2f9330d4)
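
This commit was produced by the folder-upload flow in `huggingface_hub`. For reference, a minimal sketch of the kind of call that creates such a commit; the local folder path is an assumption, and a token with write access to the repo is required:

```python
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` or the HF_TOKEN env var
api.upload_folder(
    folder_path="./Llama-3-8B-Instruct-64k-GGUF",  # assumed local folder holding the GGUF files
    repo_id="MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF",
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```

The Hub stores the large `.gguf` files through Git LFS, which is why the diff below only adds LFS filter rules and pointer files.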

.gitattributes CHANGED
@@ -33,3 +33,14 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Llama-3-8B-Instruct-64k.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3-8B-Instruct-64k.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3-8B-Instruct-64k.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3-8B-Instruct-64k.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3-8B-Instruct-64k.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3-8B-Instruct-64k.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3-8B-Instruct-64k.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3-8B-Instruct-64k.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3-8B-Instruct-64k.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3-8B-Instruct-64k.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3-8B-Instruct-64k.fp16.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-8B-Instruct-64k.Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0828acdf90012dc15f3d727f90b8a2e7f342552fc2f5d751c26afc8b0d2f76ac
+ size 3179131136
Llama-3-8B-Instruct-64k.Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:00bc7e534e58391ec947e5d1793ca36153385e6a30e3d8cb2a73ba25d09075fe
+ size 4321956096
Llama-3-8B-Instruct-64k.Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e6bf62f735697a8ab8010cf46fd34ec67bce077d842b4ed496190ed3794ab44d
+ size 4018917632
Llama-3-8B-Instruct-64k.Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:63df14ccb2d155514055e783f4b4db5dffc1e27b2395bdffd38ad7ea9aacf8f4
+ size 3664498944
Llama-3-8B-Instruct-64k.Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:83ddb9e0d2f98446c124506c21a792a482594f77ea7e4552c087357d11a2e0d3
+ size 4920733952
Llama-3-8B-Instruct-64k.Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8b99169d236789a774928a21f660270ac3b1d86152787b7919ded216580879a7
+ size 4692668672
Llama-3-8B-Instruct-64k.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:912687f2b75ca31331bfcf8b55a34a366dbb0f31df6bf65bc464c1d2431b92be
+ size 5732987136
Llama-3-8B-Instruct-64k.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dac14e0f8fef40f702211a6657c3e8c9b785f1f9fdda5f605f87a021255dadd0
+ size 5599293696
Llama-3-8B-Instruct-64k.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f05c02f26c5a6e6ab6bfa2c518e5f66cbcabb51e96ceac700808f4fb4548dc41
+ size 6596006144
Llama-3-8B-Instruct-64k.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:576df2848b746d2aef15c2c884baa95a56da7c287a78f34444a7497eaa866c95
+ size 8540770560
Llama-3-8B-Instruct-64k.fp16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17c211c1a49a70ad710b6767b418c8d9965bf46dc3e3053e7961cb5e57928ccf
+ size 16068890848
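
Each `ADDED` block above is a Git LFS pointer file rather than the model weights themselves: the repository stores only the `version`, `oid` (SHA-256 of the blob), and `size` lines, while the actual GGUF data lives in LFS storage. A minimal sketch of verifying a downloaded file against its pointer, using the Q4_K_M values from this commit (the local path is an assumption):

```python
import hashlib
import os

# Values copied from the Llama-3-8B-Instruct-64k.Q4_K_M.gguf pointer above.
EXPECTED_OID = "83ddb9e0d2f98446c124506c21a792a482594f77ea7e4552c087357d11a2e0d3"
EXPECTED_SIZE = 4_920_733_952

path = "Llama-3-8B-Instruct-64k.Q4_K_M.gguf"  # assumed local download location

# Cheap check first: the byte count must match the pointer's size line.
assert os.path.getsize(path) == EXPECTED_SIZE, "size does not match the pointer"

# Then hash the file in 1 MiB chunks and compare against the oid.
sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)
assert sha.hexdigest() == EXPECTED_OID, "sha256 does not match the pointer"
print("checksum OK")
```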
README.md ADDED
@@ -0,0 +1,48 @@
+ ---
+ tags:
+ - quantized
+ - 2-bit
+ - 3-bit
+ - 4-bit
+ - 5-bit
+ - 6-bit
+ - 8-bit
+ - GGUF
+ - text-generation
+ - llama
+ - llama-3
+ - text-generation
+ model_name: Llama-3-8B-Instruct-64k-GGUF
+ base_model: MaziyarPanahi/Llama-3-8B-Instruct-64k
+ inference: false
+ model_creator: MaziyarPanahi
+ pipeline_tag: text-generation
+ quantized_by: MaziyarPanahi
+ ---
+ # [MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF)
+ - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
+ - Original model: [MaziyarPanahi/Llama-3-8B-Instruct-64k](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-64k)
+
+ ## Description
+ [MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF) contains GGUF format model files for [MaziyarPanahi/Llama-3-8B-Instruct-64k](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-64k).
+
+ ### About GGUF
+
+ GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
+
+ Here is an incomplete list of clients and libraries that are known to support GGUF (a minimal llama-cpp-python loading sketch follows below):
+
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
+ * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux is available, in beta as of 27/11/2023.
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
+ * [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+ * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+ * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+ * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
+
+ ## Special thanks
+
+ 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
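
For completeness, a minimal sketch of loading one of the quantized files in this commit with the llama-cpp-python library listed above. The file name, local path, and context size are assumptions; any of the GGUF quants works the same way:

```python
from llama_cpp import Llama

# Assumes Llama-3-8B-Instruct-64k.Q4_K_M.gguf has already been downloaded locally.
llm = Llama(
    model_path="./Llama-3-8B-Instruct-64k.Q4_K_M.gguf",
    n_ctx=8192,  # conservative context window; raise it if you have the memory for longer contexts
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain in one sentence what a GGUF file is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```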