Tonylin52 committed
Commit 91ce4f3
1 Parent(s): cf4c08b

Upload 10 files
LICENSE.txt ADDED
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright Zhengxiao Du
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
MODEL_LICENSE.txt ADDED
@@ -0,0 +1,33 @@
+ The GLM-130B License
+
+ 1. Definitions
+
+ “Licensor” means the GLM-130B Model Team that distributes its Software.
+
+ “Software” means the GLM-130B model parameters made available under this license.
+
+ 2. License Grant
+
+ Subject to the terms and conditions of this License, the Licensor hereby grants to you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty-free copyright license to use the Software solely for your non-commercial research purposes.
+
+ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+ 3. Restriction
+
+ You will not use, copy, modify, merge, publish, distribute, reproduce, or create derivative works of the Software, in whole or in part, for any commercial, military, or illegal purposes.
+
+ You will not use the Software for any act that may undermine China's national security and national unity, harm the public interest of society, or infringe upon the rights and interests of human beings.
+
+ 4. Disclaimer
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+ 5. Limitation of Liability
+
+ EXCEPT TO THE EXTENT PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL THEORY, WHETHER BASED IN TORT, NEGLIGENCE, CONTRACT, LIABILITY, OR OTHERWISE WILL ANY LICENSOR BE LIABLE TO YOU FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES, OR ANY OTHER COMMERCIAL LOSSES, EVEN IF THE LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+
+ 6. Dispute Resolution
+
+ This license shall be governed and construed in accordance with the laws of People’s Republic of China. Any dispute arising from or in connection with this License shall be submitted to Haidian District People's Court in Beijing.
+
+ Note that the license is subject to update to a more comprehensive version. For any questions related to the license and copyright, please contact us at glm-130b@googlegroups.com.
README.md ADDED
@@ -0,0 +1,77 @@
+ ---
+ language:
+ - zh
+ - en
+ tags:
+ - glm
+ - chatglm
+ - thudm
+ ---
+ # ChatGLM-6B
+ ## Introduction
+ ChatGLM-6B is an open bilingual (Chinese-English) dialogue language model based on the [General Language Model (GLM)](https://github.com/THUDM/GLM) framework, with 6.2 billion parameters. With model quantization, users can deploy it locally on consumer-grade graphics cards (as little as 6GB of GPU memory is required at the INT4 quantization level). ChatGLM-6B uses technology similar to ChatGPT, optimized for Chinese QA and dialogue. The model is trained on about 1T tokens of Chinese and English corpus, supplemented by supervised fine-tuning, feedback bootstrapping, and reinforcement learning with human feedback. With only about 6.2 billion parameters, the model is able to generate answers that align well with human preferences.
+
+ ## Software Dependencies
+
+ ```shell
+ pip install protobuf==3.20.0 transformers==4.26.1 icetk cpm_kernels
+ ```
+
+ ## Code Usage
+
+ You can generate dialogue with the ChatGLM-6B model using the following code:
+
+ ```ipython
+ >>> from transformers import AutoTokenizer, AutoModel
+ >>> tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
+ >>> model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
+ >>> response, history = model.chat(tokenizer, "你好", history=[])
+ >>> print(response)
+ 你好👋!我是人工智能助手 ChatGLM-6B,很高兴见到你,欢迎问我任何问题。
+ >>> response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history)
+ >>> print(response)
+ 晚上睡不着可能会让你感到焦虑或不舒服,但以下是一些可以帮助你入睡的方法:
+
+ 1. 制定规律的睡眠时间表:保持规律的睡眠时间表可以帮助你建立健康的睡眠习惯,使你更容易入睡。尽量在每天的相同时间上床,并在同一时间起床。
+ 2. 创造一个舒适的睡眠环境:确保睡眠环境舒适,安静,黑暗且温度适宜。可以使用舒适的床上用品,并保持房间通风。
+ 3. 放松身心:在睡前做些放松的活动,例如泡个热水澡,听些轻柔的音乐,阅读一些有趣的书籍等,有助于缓解紧张和焦虑,使你更容易入睡。
+ 4. 避免饮用含有咖啡因的饮料:咖啡因是一种刺激性物质,会影响你的睡眠质量。尽量避免在睡前饮用含有咖啡因的饮料,例如咖啡,茶和可乐。
+ 5. 避免在床上做与睡眠无关的事情:在床上做些与睡眠无关的事情,例如看电影,玩游戏或工作等,可能会干扰你的睡眠。
+ 6. 尝试呼吸技巧:深呼吸是一种放松技巧,可以帮助你缓解紧张和焦虑,使你更容易入睡。试着慢慢吸气,保持几秒钟,然后缓慢呼气。
+
+ 如果这些方法无法帮助你入睡,你可以考虑咨询医生或睡眠专家,寻求进一步的建议。
+ ```
+
+ For more instructions, including how to run the CLI and web demos and how to use model quantization to save GPU memory, please refer to our [Github Repo](https://github.com/THUDM/ChatGLM-6B); a quantized-loading sketch is also given below.
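
As a minimal sketch of the quantized path (hedged: this assumes the `quantize()` helper defined in the later part of this repo's modeling_chatglm.py, beyond the portion shown in this commit, and the `cpm_kernels` dependency listed above):

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
# quantize(8) or quantize(4) trades a little accuracy for much less GPU memory;
# at INT4 roughly 6GB suffices, per the introduction above.
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().quantize(8).cuda()
response, history = model.chat(tokenizer, "你好", history=[])
```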
+
+ ## License
+
+ The code in this repository is open-sourced under the [Apache-2.0](LICENSE) license; use of the ChatGLM-6B model weights additionally requires compliance with the [Model License](MODEL_LICENSE).
+
+ ## Citation
+
+ If you find our work helpful, please consider citing the following papers:
+
+ ```
+ @inproceedings{zeng2023glm-130b,
+   title={{GLM}-130B: An Open Bilingual Pre-trained Model},
+   author={Aohan Zeng and Xiao Liu and Zhengxiao Du and Zihan Wang and Hanyu Lai and Ming Ding and Zhuoyi Yang and Yifan Xu and Wendi Zheng and Xiao Xia and Weng Lam Tam and Zixuan Ma and Yufei Xue and Jidong Zhai and Wenguang Chen and Zhiyuan Liu and Peng Zhang and Yuxiao Dong and Jie Tang},
+   booktitle={The Eleventh International Conference on Learning Representations (ICLR)},
+   year={2023},
+   url={https://openreview.net/forum?id=-Aw0rrrPUF}
+ }
+ ```
+ ```
+ @inproceedings{du2022glm,
+   title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
+   author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
+   booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
+   pages={320--335},
+   year={2022}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "_name_or_path": "THUDM/chatglm-6b",
+   "architectures": [
+     "ChatGLMModel"
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_chatglm.ChatGLMConfig",
+     "AutoModel": "modeling_chatglm.ChatGLMForConditionalGeneration",
+     "AutoModelForSeq2SeqLM": "modeling_chatglm.ChatGLMForConditionalGeneration"
+   },
+   "bos_token_id": 150004,
+   "eos_token_id": 150005,
+   "hidden_size": 4096,
+   "inner_hidden_size": 16384,
+   "layernorm_epsilon": 1e-05,
+   "max_sequence_length": 2048,
+   "model_type": "chatglm",
+   "num_attention_heads": 32,
+   "num_layers": 28,
+   "position_encoding_2d": true,
+   "torch_dtype": "float16",
+   "transformers_version": "4.23.1",
+   "use_cache": true,
+   "vocab_size": 150528
+ }
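
The `auto_map` block is what lets the generic `Auto*` classes resolve to this repo's custom code when `trust_remote_code=True` is passed. A small sketch of reading the config back and checking two derived quantities against the values above:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
assert config.hidden_size // config.num_attention_heads == 128  # per-head dimension
assert config.inner_hidden_size == 4 * config.hidden_size       # GLU expansion factor
```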
ice_text.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:99871e0c85db81ad7af1028854fd091cd5778c8414ae9d94bbbc10d02c831c21
+ size 2699926
modeling_chatglm.py ADDED
@@ -0,0 +1,1261 @@
+ """ PyTorch ChatGLM model. """
+
+ import math
+ import copy
+ import os
+ import warnings
+ import re
+
+ import torch
+ import torch.utils.checkpoint
+ import torch.nn.functional as F
+ from torch import nn
+ from torch.nn import CrossEntropyLoss, LayerNorm
+ from torch.nn.utils import skip_init
+ from typing import Optional, Tuple, Union, List, Callable
+
+ from transformers.utils import (
+     add_code_sample_docstrings,
+     add_start_docstrings,
+     add_start_docstrings_to_model_forward,
+ )
+ from transformers.modeling_outputs import (
+     BaseModelOutputWithPast,
+     CausalLMOutputWithPast,
+     BaseModelOutputWithPastAndCrossAttentions,
+ )
+ from transformers.modeling_utils import PreTrainedModel
+ from transformers.utils import logging
+ from transformers.generation.logits_process import LogitsProcessor
+ from transformers.generation.utils import LogitsProcessorList, StoppingCriteriaList, GenerationConfig
+
+ from .configuration_chatglm import ChatGLMConfig
+
+ # flags required to enable jit fusion kernels
+ torch._C._jit_set_profiling_mode(False)
+ torch._C._jit_set_profiling_executor(False)
+ torch._C._jit_override_can_fuse_on_cpu(True)
+ torch._C._jit_override_can_fuse_on_gpu(True)
+
+ logger = logging.get_logger(__name__)
+
+ _CHECKPOINT_FOR_DOC = "THUDM/ChatGLM-6B"
+ _CONFIG_FOR_DOC = "ChatGLM6BConfig"
+
+ CHATGLM_6B_PRETRAINED_MODEL_ARCHIVE_LIST = [
+     "THUDM/chatglm-6b",
+     # See all ChatGLM-6B models at https://huggingface.co/models?filter=chatglm
+ ]
+
+
+ class InvalidScoreLogitsProcessor(LogitsProcessor):
+     def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
+         # If any logit is NaN/Inf, reset the row and pin all probability
+         # mass on one hard-coded token id so that sampling cannot crash.
+         if torch.isnan(scores).any() or torch.isinf(scores).any():
+             scores.zero_()
+             scores[..., 20005] = 5e4
+         return scores
+
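
A standalone sketch of the processor's effect, using the vocabulary size from config.json (the shapes are illustrative; token id 20005 is hard-coded above):

```python
import torch
from transformers.generation.logits_process import LogitsProcessorList

processors = LogitsProcessorList([InvalidScoreLogitsProcessor()])  # class from above
scores = torch.zeros(1, 150528)       # vocab_size from config.json
scores[0, 0] = float("nan")           # simulate a corrupted logit row
input_ids = torch.tensor([[5]])       # dummy ids; unused by this processor
fixed = processors(input_ids, scores)
assert fixed[0, 20005] == 5e4 and not torch.isnan(fixed).any()
```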
+
+ def load_tf_weights_in_chatglm_6b(model, config, tf_checkpoint_path):
+     """Load tf checkpoints in a pytorch model."""
+     try:
+         import re
+
+         import numpy as np
+         import tensorflow as tf
+     except ImportError:
+         logger.error(
+             "Loading a TensorFlow model in PyTorch requires TensorFlow to be installed. Please see "
+             "https://www.tensorflow.org/install/ for installation instructions."
+         )
+         raise
+     tf_path = os.path.abspath(tf_checkpoint_path)
+     logger.info(f"Converting TensorFlow checkpoint from {tf_path}")
+     # Load weights from TF model
+     init_vars = tf.train.list_variables(tf_path)
+     names = []
+     arrays = []
+     for name, shape in init_vars:
+         logger.info(f"Loading TF weight {name} with shape {shape}")
+         array = tf.train.load_variable(tf_path, name)
+         names.append(name)
+         arrays.append(array)
+
+     for name, array in zip(names, arrays):
+         name = name.split("/")
+         # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculate m and v,
+         # which are not required for using the pretrained model
+         if any(
+             n in ["adam_v", "adam_m", "AdamWeightDecayOptimizer", "AdamWeightDecayOptimizer_1", "global_step"]
+             for n in name
+         ):
+             logger.info(f"Skipping {'/'.join(name)}")
+             continue
+         pointer = model
+         for m_name in name:
+             if re.fullmatch(r"[A-Za-z]+_\d+", m_name):
+                 scope_names = re.split(r"_(\d+)", m_name)
+             else:
+                 scope_names = [m_name]
+             if scope_names[0] == "kernel" or scope_names[0] == "gamma":
+                 pointer = getattr(pointer, "weight")
+             elif scope_names[0] == "output_bias" or scope_names[0] == "beta":
+                 pointer = getattr(pointer, "bias")
+             elif scope_names[0] == "output_weights":
+                 pointer = getattr(pointer, "weight")
+             elif scope_names[0] == "squad":
+                 pointer = getattr(pointer, "classifier")
+             else:
+                 try:
+                     pointer = getattr(pointer, scope_names[0])
+                 except AttributeError:
+                     logger.info(f"Skipping {'/'.join(name)}")
+                     continue
+             if len(scope_names) >= 2:
+                 num = int(scope_names[1])
+                 pointer = pointer[num]
+         if m_name[-11:] == "_embeddings":
+             pointer = getattr(pointer, "weight")
+         elif m_name == "kernel":
+             array = np.transpose(array)
+         try:
+             assert (
+                 pointer.shape == array.shape
+             ), f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched"
+         except AssertionError as e:
+             e.args += (pointer.shape, array.shape)
+             raise
+         logger.info(f"Initialize PyTorch weight {name}")
+         pointer.data = torch.from_numpy(array)
+     return model
+
+
+ @torch.jit.script
+ def gelu_impl(x):
+     """OpenAI's gelu implementation."""
+     return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * x *
+                                        (1.0 + 0.044715 * x * x)))
+
+
+ def gelu(x):
+     return gelu_impl(x)
+
+
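
The constant 0.7978845608028654 is sqrt(2/pi), i.e. this is the standard tanh approximation of GELU. A quick sanity-check sketch against PyTorch's built-in approximation (assumes PyTorch >= 1.12, where the `approximate` argument exists):

```python
import torch
import torch.nn.functional as F

def gelu_tanh(x):
    # same formula as gelu_impl above, without the jit wrapper
    return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * x * (1.0 + 0.044715 * x * x)))

x = torch.randn(1024)
assert torch.allclose(gelu_tanh(x), F.gelu(x, approximate="tanh"), atol=1e-5)
```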
+ class RotaryEmbedding(torch.nn.Module):
+     def __init__(self, dim, base=10000, precision=torch.half, learnable=False):
+         super().__init__()
+         inv_freq = 1. / (base ** (torch.arange(0, dim, 2).float() / dim))
+         inv_freq = inv_freq.half()
+         self.learnable = learnable
+         if learnable:
+             self.inv_freq = torch.nn.Parameter(inv_freq)
+             self.max_seq_len_cached = None
+         else:
+             self.register_buffer('inv_freq', inv_freq)
+             self.max_seq_len_cached = None
+             self.cos_cached = None
+             self.sin_cached = None
+         self.precision = precision
+
+     def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys,
+                               error_msgs):
+         pass
+
+     def forward(self, x, seq_dim=1, seq_len=None):
+         if seq_len is None:
+             seq_len = x.shape[seq_dim]
+         if self.max_seq_len_cached is None or (seq_len > self.max_seq_len_cached):
+             self.max_seq_len_cached = None if self.learnable else seq_len
+             t = torch.arange(seq_len, device=x.device, dtype=self.inv_freq.dtype)
+             freqs = torch.einsum('i,j->ij', t, self.inv_freq)
+             # Different from paper, but it uses a different permutation in order to obtain the same calculation
+             emb = torch.cat((freqs, freqs), dim=-1).to(x.device)
+             if self.precision == torch.bfloat16:
+                 emb = emb.float()
+
+             # [sx, 1 (b * np), hn]
+             cos_cached = emb.cos()[:, None, :]
+             sin_cached = emb.sin()[:, None, :]
+             if self.precision == torch.bfloat16:
+                 cos_cached = cos_cached.bfloat16()
+                 sin_cached = sin_cached.bfloat16()
+             if self.learnable:
+                 return cos_cached, sin_cached
+             self.cos_cached, self.sin_cached = cos_cached, sin_cached
+         return self.cos_cached[:seq_len, ...], self.sin_cached[:seq_len, ...]
+
+
+ def rotate_half(x):
+     x1, x2 = x[..., :x.shape[-1] // 2], x[..., x.shape[-1] // 2:]
+     return torch.cat((-x2, x1), dim=x1.ndim - 1)  # dim=-1 triggers a bug in earlier torch versions
+
+
+ @torch.jit.script
+ def apply_rotary_pos_emb_index(q, k, cos, sin, position_id):
+     # position_id: [sq, b], q, k: [sq, b, np, hn], cos: [sq, 1, hn] -> [sq, b, 1, hn]
+     cos, sin = F.embedding(position_id, cos.squeeze(1)).unsqueeze(2), \
+                F.embedding(position_id, sin.squeeze(1)).unsqueeze(2)
+     q, k = (q * cos) + (rotate_half(q) * sin), (k * cos) + (rotate_half(k) * sin)
+     return q, k
+
+
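
A standalone sketch of how these pieces combine, building the cos/sin tables in float32 (mirroring `RotaryEmbedding.forward`) and checking that the indexed rotation is norm-preserving; all sizes are illustrative, and `apply_rotary_pos_emb_index` is assumed in scope from above:

```python
import torch

sq, b, np_, hn = 8, 2, 4, 64                                  # seq, batch, heads, head dim

inv_freq = 1. / (10000 ** (torch.arange(0, hn, 2).float() / hn))
freqs = torch.einsum('i,j->ij', torch.arange(sq).float(), inv_freq)
emb = torch.cat((freqs, freqs), dim=-1)
cos, sin = emb.cos()[:, None, :], emb.sin()[:, None, :]       # each [sq, 1, hn]

q = torch.randn(sq, b, np_, hn)
k = torch.randn(sq, b, np_, hn)
position_id = torch.arange(sq).unsqueeze(1).expand(sq, b)     # [sq, b]

q_rot, k_rot = apply_rotary_pos_emb_index(q, k, cos, sin, position_id)
# The rotation acts on (x_i, x_{i+hn/2}) pairs, so vector norms are unchanged.
assert torch.allclose(q_rot.norm(dim=-1), q.norm(dim=-1), atol=1e-4)
```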
+ def attention_fn(
+         self,
+         query_layer,
+         key_layer,
+         value_layer,
+         attention_mask,
+         hidden_size_per_partition,
+         layer_id,
+         layer_past=None,
+         scaling_attention_score=True,
+         use_cache=False,
+ ):
+     if layer_past is not None:
+         past_key, past_value = layer_past
+         key_layer = torch.cat((past_key, key_layer), dim=0)
+         value_layer = torch.cat((past_value, value_layer), dim=0)
+
+     # seqlen, batch, num_attention_heads, hidden_size_per_attention_head
+     seq_len, b, nh, hidden_size = key_layer.shape
+
+     if use_cache:
+         present = (key_layer, value_layer)
+     else:
+         present = None
+
+     query_key_layer_scaling_coeff = float(layer_id + 1)
+     if scaling_attention_score:
+         query_layer = query_layer / (math.sqrt(hidden_size) * query_key_layer_scaling_coeff)
+
+     # ===================================
+     # Raw attention scores. [b, np, s, s]
+     # ===================================
+
+     # [b, np, sq, sk]
+     output_size = (query_layer.size(1), query_layer.size(2), query_layer.size(0), key_layer.size(0))
+
+     # [sq, b, np, hn] -> [sq, b * np, hn]
+     query_layer = query_layer.view(output_size[2], output_size[0] * output_size[1], -1)
+     # [sk, b, np, hn] -> [sk, b * np, hn]
+     key_layer = key_layer.view(output_size[3], output_size[0] * output_size[1], -1)
+
+     matmul_result = torch.empty(
+         output_size[0] * output_size[1],
+         output_size[2],
+         output_size[3],
+         dtype=query_layer.dtype,
+         device=query_layer.device,
+     )
+
+     matmul_result = torch.baddbmm(
+         matmul_result,
+         query_layer.transpose(0, 1),  # [b * np, sq, hn]
+         key_layer.transpose(0, 1).transpose(1, 2),  # [b * np, hn, sk]
+         beta=0.0,
+         alpha=1.0,
+     )
+
+     # change view to [b, np, sq, sk]
+     attention_scores = matmul_result.view(*output_size)
+
+     if self.scale_mask_softmax:
+         self.scale_mask_softmax.scale = query_key_layer_scaling_coeff
+         attention_probs = self.scale_mask_softmax(attention_scores, attention_mask.contiguous())
+     else:
+         if not (attention_mask == 0).all():
+             # if auto-regressive, skip
+             attention_scores.masked_fill_(attention_mask, -10000.0)
+         dtype = attention_scores.type()
+         attention_scores = attention_scores.float()
+         attention_scores = attention_scores * query_key_layer_scaling_coeff
+
+         attention_probs = F.softmax(attention_scores, dim=-1)
+
+         attention_probs = attention_probs.type(dtype)
+
+     # =========================
+     # Context layer. [sq, b, hp]
+     # =========================
+
+     # value_layer -> context layer.
+     # [sk, b, np, hn] --> [b, np, sq, hn]
+
+     # context layer shape: [b, np, sq, hn]
+     output_size = (value_layer.size(1), value_layer.size(2), query_layer.size(0), value_layer.size(3))
+
+     # change view [sk, b * np, hn]
+     value_layer = value_layer.view(value_layer.size(0), output_size[0] * output_size[1], -1)
+
+     # change view [b * np, sq, sk]
+     attention_probs = attention_probs.view(output_size[0] * output_size[1], output_size[2], -1)
+
+     # matmul: [b * np, sq, hn]
+     context_layer = torch.bmm(attention_probs, value_layer.transpose(0, 1))
+
+     # change view [b, np, sq, hn]
+     context_layer = context_layer.view(*output_size)
+
+     # [b, np, sq, hn] --> [sq, b, np, hn]
+     context_layer = context_layer.permute(2, 0, 1, 3).contiguous()
+
+     # [sq, b, np, hn] --> [sq, b, hp]
+     new_context_layer_shape = context_layer.size()[:-2] + (hidden_size_per_partition,)
+     context_layer = context_layer.view(*new_context_layer_shape)
+
+     outputs = (context_layer, present, attention_probs)
+
+     return outputs
+
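
Note the numerics here: `attention_fn` pre-divides the queries by `sqrt(hn) * (layer_id + 1)` and multiplies the scores by `(layer_id + 1)` again after the cast to float32, so the softmax still sees the standard `1/sqrt(hn)` scaling while the half-precision matmul stays in a safer dynamic range. A minimal check of that identity with illustrative sizes:

```python
import math
import torch

sq, hn, layer_id = 8, 64, 5
coeff = float(layer_id + 1)
q = torch.randn(sq, hn)
k = torch.randn(sq, hn)

scores_scaled = (q / (math.sqrt(hn) * coeff)) @ k.T * coeff  # what attention_fn computes
scores_direct = (q @ k.T) / math.sqrt(hn)                    # textbook scaling
assert torch.allclose(scores_scaled, scores_direct, atol=1e-5)
```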
+
+ class SelfAttention(torch.nn.Module):
+     def __init__(self, hidden_size, num_attention_heads,
+                  layer_id, hidden_size_per_attention_head=None, bias=True,
+                  params_dtype=torch.float, position_encoding_2d=True):
+         super(SelfAttention, self).__init__()
+
+         self.layer_id = layer_id
+         self.hidden_size = hidden_size
+         self.hidden_size_per_partition = hidden_size
+         self.num_attention_heads = num_attention_heads
+         self.num_attention_heads_per_partition = num_attention_heads
+         self.position_encoding_2d = position_encoding_2d
+         self.rotary_emb = RotaryEmbedding(
+             self.hidden_size // (self.num_attention_heads * 2)
+             if position_encoding_2d
+             else self.hidden_size // self.num_attention_heads,
+             base=10000,
+             precision=torch.half,
+             learnable=False,
+         )
+
+         self.scale_mask_softmax = None
+
+         if hidden_size_per_attention_head is None:
+             self.hidden_size_per_attention_head = hidden_size // num_attention_heads
+         else:
+             self.hidden_size_per_attention_head = hidden_size_per_attention_head
+
+         self.inner_hidden_size = num_attention_heads * self.hidden_size_per_attention_head
+
+         # Strided linear layer.
+         self.query_key_value = skip_init(
+             torch.nn.Linear,
+             hidden_size,
+             3 * self.inner_hidden_size,
+             bias=bias,
+             dtype=params_dtype,
+         )
+
+         self.dense = skip_init(
+             torch.nn.Linear,
+             self.inner_hidden_size,
+             hidden_size,
+             bias=bias,
+             dtype=params_dtype,
+         )
+
+     @staticmethod
+     def attention_mask_func(attention_scores, attention_mask):
+         attention_scores.masked_fill_(attention_mask, -10000.0)
+         return attention_scores
+
+     def split_tensor_along_last_dim(self, tensor, num_partitions,
+                                     contiguous_split_chunks=False):
+         """Split a tensor along its last dimension.
+         Arguments:
+             tensor: input tensor.
+             num_partitions: number of partitions to split the tensor
+             contiguous_split_chunks: If True, make each chunk contiguous
+                 in memory.
+         """
+         # Get the size and dimension.
+         last_dim = tensor.dim() - 1
+         last_dim_size = tensor.size()[last_dim] // num_partitions
+         # Split.
+         tensor_list = torch.split(tensor, last_dim_size, dim=last_dim)
+         # Note: torch.split does not create contiguous tensors by default.
+         if contiguous_split_chunks:
+             return tuple(chunk.contiguous() for chunk in tensor_list)
+
+         return tensor_list
+
+     def forward(
+             self,
+             hidden_states: torch.Tensor,
+             position_ids,
+             attention_mask: torch.Tensor,
+             layer_id,
+             layer_past: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
+             use_cache: bool = False,
+             output_attentions: bool = False,
+     ):
+         """
+         hidden_states: [seq_len, batch, hidden_size]
+         attention_mask: [(1, 1), seq_len, seq_len]
+         """
+
+         # [seq_len, batch, 3 * hidden_size]
+         mixed_raw_layer = self.query_key_value(hidden_states)
+
+         # [seq_len, batch, 3 * hidden_size] --> [seq_len, batch, num_attention_heads, 3 * hidden_size_per_attention_head]
+         new_tensor_shape = mixed_raw_layer.size()[:-1] + (
+             self.num_attention_heads_per_partition,
+             3 * self.hidden_size_per_attention_head,
+         )
+         mixed_raw_layer = mixed_raw_layer.view(*new_tensor_shape)
+
+         # [seq_len, batch, num_attention_heads, hidden_size_per_attention_head]
+         (query_layer, key_layer, value_layer) = self.split_tensor_along_last_dim(mixed_raw_layer, 3)
+
+         if self.position_encoding_2d:
+             q1, q2 = query_layer.chunk(2, dim=(query_layer.ndim - 1))
+             k1, k2 = key_layer.chunk(2, dim=(key_layer.ndim - 1))
+             cos, sin = self.rotary_emb(q1, seq_len=position_ids.max() + 1)
+             position_ids, block_position_ids = position_ids[:, 0, :].transpose(0, 1).contiguous(), \
+                 position_ids[:, 1, :].transpose(0, 1).contiguous()
+             q1, k1 = apply_rotary_pos_emb_index(q1, k1, cos, sin, position_ids)
+             q2, k2 = apply_rotary_pos_emb_index(q2, k2, cos, sin, block_position_ids)
+             query_layer = torch.concat([q1, q2], dim=(q1.ndim - 1))
+             key_layer = torch.concat([k1, k2], dim=(k1.ndim - 1))
+         else:
+             position_ids = position_ids.transpose(0, 1)
+             cos, sin = self.rotary_emb(value_layer, seq_len=position_ids.max() + 1)
+             # [seq_len, batch, num_attention_heads, hidden_size_per_attention_head]
+             query_layer, key_layer = apply_rotary_pos_emb_index(query_layer, key_layer, cos, sin, position_ids)
+
+         # [seq_len, batch, hidden_size]
+         context_layer, present, attention_probs = attention_fn(
+             self=self,
+             query_layer=query_layer,
+             key_layer=key_layer,
+             value_layer=value_layer,
+             attention_mask=attention_mask,
+             hidden_size_per_partition=self.hidden_size_per_partition,
+             layer_id=layer_id,
+             layer_past=layer_past,
+             use_cache=use_cache
+         )
+
+         output = self.dense(context_layer)
+
+         outputs = (output, present)
+
+         if output_attentions:
+             outputs += (attention_probs,)
+
+         return outputs  # output, present, attention_probs
+
+
+ class GEGLU(torch.nn.Module):
+     def __init__(self):
+         super().__init__()
+         self.activation_fn = F.gelu
+
+     def forward(self, x):
+         # dim=-1 breaks in jit for pt<1.10
+         x1, x2 = x.chunk(2, dim=(x.ndim - 1))
+         return x1 * self.activation_fn(x2)
+
+
+ class GLU(torch.nn.Module):
+     def __init__(self, hidden_size, inner_hidden_size=None,
+                  layer_id=None, bias=True, activation_func=gelu, params_dtype=torch.float):
+         super(GLU, self).__init__()
+         self.layer_id = layer_id
+         self.activation_func = activation_func
+
+         # Project to 4h.
+         self.hidden_size = hidden_size
+         if inner_hidden_size is None:
+             inner_hidden_size = 4 * hidden_size
+         self.inner_hidden_size = inner_hidden_size
+         self.dense_h_to_4h = skip_init(
+             torch.nn.Linear,
+             self.hidden_size,
+             self.inner_hidden_size,
+             bias=bias,
+             dtype=params_dtype,
+         )
+         # Project back to h.
+         self.dense_4h_to_h = skip_init(
+             torch.nn.Linear,
+             self.inner_hidden_size,
+             self.hidden_size,
+             bias=bias,
+             dtype=params_dtype,
+         )
+
+     def forward(self, hidden_states):
+         """
+         hidden_states: [seq_len, batch, hidden_size]
+         """
+
+         # [seq_len, batch, inner_hidden_size]
+         intermediate_parallel = self.dense_h_to_4h(hidden_states)
+
+         intermediate_parallel = self.activation_func(intermediate_parallel)
+
+         output = self.dense_4h_to_h(intermediate_parallel)
+
+         return output
+
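
Note that `GEGLU` is defined but not wired into `GLU` above, which applies a plain GELU after the up-projection. For reference, a tiny sketch of GEGLU's gating behavior, assuming the class as defined:

```python
import torch

geglu = GEGLU()
x = torch.randn(3, 8)   # the last dimension is split into two halves
y = geglu(x)            # y = x1 * gelu(x2)
assert y.shape == (3, 4)
```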
+
+ class GLMBlock(torch.nn.Module):
+     def __init__(
+             self,
+             hidden_size,
+             num_attention_heads,
+             layernorm_epsilon,
+             layer_id,
+             inner_hidden_size=None,
+             hidden_size_per_attention_head=None,
+             layernorm=LayerNorm,
+             use_bias=True,
+             params_dtype=torch.float,
+             num_layers=28,
+             position_encoding_2d=True
+     ):
+         super(GLMBlock, self).__init__()
+         # Set output layer initialization if not provided.
+
+         self.layer_id = layer_id
+
+         # Layernorm on the input data.
+         self.input_layernorm = layernorm(hidden_size, eps=layernorm_epsilon)
+
+         self.position_encoding_2d = position_encoding_2d
+
+         # Self attention.
+         self.attention = SelfAttention(
+             hidden_size,
+             num_attention_heads,
+             layer_id,
+             hidden_size_per_attention_head=hidden_size_per_attention_head,
+             bias=use_bias,
+             params_dtype=params_dtype,
+             position_encoding_2d=self.position_encoding_2d
+         )
+
+         # Layernorm after the attention block.
+         self.post_attention_layernorm = layernorm(hidden_size, eps=layernorm_epsilon)
+
+         self.num_layers = num_layers
+
+         # GLU
+         self.mlp = GLU(
+             hidden_size,
+             inner_hidden_size=inner_hidden_size,
+             bias=use_bias,
+             layer_id=layer_id,
+             params_dtype=params_dtype,
+         )
+
+     def forward(
+             self,
+             hidden_states: torch.Tensor,
+             position_ids,
+             attention_mask: torch.Tensor,
+             layer_id,
+             layer_past: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
+             use_cache: bool = False,
+             output_attentions: bool = False,
+     ):
+         """
+         hidden_states: [seq_len, batch, hidden_size]
+         attention_mask: [(1, 1), seq_len, seq_len]
+         """
+
+         # Layer norm at the beginning of the transformer layer.
+         # [seq_len, batch, hidden_size]
+         attention_input = self.input_layernorm(hidden_states)
+
+         # Self attention.
+         attention_outputs = self.attention(
+             attention_input,
+             position_ids,
+             attention_mask=attention_mask,
+             layer_id=layer_id,
+             layer_past=layer_past,
+             use_cache=use_cache,
+             output_attentions=output_attentions
+         )
+
+         attention_output = attention_outputs[0]
+
+         outputs = attention_outputs[1:]
+
+         # Residual connection: the layernorm output is scaled by
+         # alpha = sqrt(2 * num_layers) before the branch output is added.
+         alpha = (2 * self.num_layers) ** 0.5
+         hidden_states = attention_input * alpha + attention_output
+
+         mlp_input = self.post_attention_layernorm(hidden_states)
+
+         # MLP.
+         mlp_output = self.mlp(mlp_input)
+
+         # Second residual connection.
+         output = mlp_input * alpha + mlp_output
+
+         if use_cache:
+             outputs = (output,) + outputs
+         else:
+             outputs = (output,) + outputs[1:]
+
+         return outputs  # hidden_states, present, attentions
+
+
+ class ChatGLMPreTrainedModel(PreTrainedModel):
+     """
+     An abstract class to handle weights initialization and
+     a simple interface for downloading and loading pretrained models.
+     """
+
+     is_parallelizable = False
+     supports_gradient_checkpointing = False
+     config_class = ChatGLMConfig
+     base_model_prefix = "transformer"
+     _no_split_modules = ["GLM6BBlock"]
+
+     def __init__(self, *inputs, **kwargs):
+         super().__init__(*inputs, **kwargs)
+
+     def _init_weights(self, module: nn.Module):
+         """Initialize the weights."""
+         return
+
+
+ CHATGLM_6B_START_DOCSTRING = r"""
+     This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class.
+     Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general
+     usage and behavior.
+
+     Parameters:
+         config ([`~ChatGLM6BConfig`]): Model configuration class with all the parameters of the model.
+             Initializing with a config file does not load the weights associated with the model, only the configuration.
+             Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
+ """
+
+ CHATGLM_6B_INPUTS_DOCSTRING = r"""
+     Args:
+         input_ids (`torch.LongTensor` of shape `({0})`):
+             Indices of input sequence tokens in the vocabulary.
+
+             Indices can be obtained using [`ChatGLM6BTokenizer`].
+             See [`PreTrainedTokenizer.encode`] and
+             [`PreTrainedTokenizer.__call__`] for details.
+
+             [What are input IDs?](../glossary#input-ids)
+         attention_mask (`torch.FloatTensor` of shape `({0})`, *optional*):
+             Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
+
+             - 1 for tokens that are **not masked**,
+             - 0 for tokens that are **masked**.
+
+             [What are attention masks?](../glossary#attention-mask)
+         token_type_ids (`torch.LongTensor` of shape `({0})`, *optional*):
+             Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
+
+             - 0 corresponds to a *sentence A* token,
+             - 1 corresponds to a *sentence B* token.
+
+             [What are token type IDs?](../glossary#token-type-ids)
+         position_ids (`torch.LongTensor` of shape `({0})`, *optional*):
+             Indices of positions of each input sequence token in the position embeddings.
+             Selected in the range `[0, config.max_position_embeddings - 1]`.
+
+             [What are position IDs?](../glossary#position-ids)
+         head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
+             Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
+
+             - 1 indicates the head is **not masked**,
+             - 0 indicates the head is **masked**.
+
+         inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*):
+             Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
+             This is useful if you want more control over how to convert *input_ids* indices into associated vectors
+             than the model's internal embedding lookup matrix.
+         output_attentions (`bool`, *optional*):
+             Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
+             tensors for more detail.
+         output_hidden_states (`bool`, *optional*):
+             Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
+             more detail.
+         return_dict (`bool`, *optional*):
+             Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
+ """
+
+
+ @add_start_docstrings(
+     "The bare ChatGLM-6B Model transformer outputting raw hidden-states without any specific head on top.",
+     CHATGLM_6B_START_DOCSTRING,
+ )
+ class ChatGLMModel(ChatGLMPreTrainedModel):
+     """
+
+     The model can behave as an encoder (with only self-attention) as well
+     as a decoder, in which case a layer of cross-attention is added between
+     the self-attention layers, following the architecture described in [Attention is
+     all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani,
+     Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
+
+     To behave as a decoder the model needs to be initialized with the
+     `is_decoder` argument of the configuration set to `True`.
+     To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder`
+     argument and `add_cross_attention` set to `True`; an
+     `encoder_hidden_states` is then expected as an input to the forward pass.
+     """
+
+     def __init__(self, config: ChatGLMConfig):
+         super().__init__(config)
+
+         # recording parameters
+         self.max_sequence_length = config.max_sequence_length
+         self.hidden_size = config.hidden_size
+         self.params_dtype = torch.half
+         self.num_attention_heads = config.num_attention_heads
+         self.vocab_size = config.vocab_size
+         self.num_layers = config.num_layers
+         self.layernorm_epsilon = config.layernorm_epsilon
+         self.inner_hidden_size = config.inner_hidden_size
+         self.hidden_size_per_attention_head = self.hidden_size // self.num_attention_heads
+         self.position_encoding_2d = config.position_encoding_2d
+
+         self.word_embeddings = skip_init(
+             torch.nn.Embedding,
+             num_embeddings=self.vocab_size, embedding_dim=self.hidden_size,
+             dtype=self.params_dtype
+         )
+
+         def get_layer(layer_id):
+             return GLMBlock(
+                 self.hidden_size,
+                 self.num_attention_heads,
+                 self.layernorm_epsilon,
+                 layer_id,
+                 inner_hidden_size=self.inner_hidden_size,
+                 hidden_size_per_attention_head=self.hidden_size_per_attention_head,
+                 layernorm=LayerNorm,
+                 use_bias=True,
+                 params_dtype=self.params_dtype,
+                 position_encoding_2d=self.position_encoding_2d,
+             )
+
+         self.layers = torch.nn.ModuleList(
+             [get_layer(layer_id) for layer_id in range(self.num_layers)]
+         )
+
+         # Final layer norm before output.
+         self.final_layernorm = LayerNorm(self.hidden_size, eps=self.layernorm_epsilon)
+
+     def get_input_embeddings(self):
+         return self.word_embeddings
+
+     def set_input_embeddings(self, new_embeddings: torch.Tensor):
+         self.word_embeddings = new_embeddings
+
+     def get_masks(self, seq, device):
+         context_length = seq.index(self.config.bos_token_id) + 1
+
+         attention_mask = torch.ones((1, len(seq), len(seq)), device=device)
+         attention_mask.tril_()
+         attention_mask[..., :context_length - 1] = 1
+         attention_mask.unsqueeze_(1)
+         attention_mask = (attention_mask < 0.5).bool()
+
+         return attention_mask
+
+     def get_position_ids(self, seq, mask_position, device, gmask=False):
+         context_length = seq.index(self.config.bos_token_id) + 1
+         if self.position_encoding_2d:
+             seq_length = seq.index(self.config.bos_token_id)
+             position_ids = torch.arange(context_length, dtype=torch.long, device=device)
+             if not gmask:
+                 position_ids[seq_length:] = mask_position
+             block_position_ids = torch.cat((
+                 torch.zeros(seq_length, dtype=torch.long, device=device),
+                 torch.arange(context_length - seq_length, dtype=torch.long, device=device) + 1
+             ))
+             position_ids = torch.stack((position_ids, block_position_ids), dim=0)
+         else:
+             position_ids = torch.arange(context_length, dtype=torch.long, device=device)
+             if not gmask:
+                 position_ids[context_length - 1:] = mask_position
+
+         position_ids = position_ids.unsqueeze(0)
+
+         return position_ids
+
+     @add_start_docstrings_to_model_forward(CHATGLM_6B_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
+     @add_code_sample_docstrings(
+         checkpoint=_CHECKPOINT_FOR_DOC,
+         output_type=BaseModelOutputWithPastAndCrossAttentions,
+         config_class=_CONFIG_FOR_DOC,
+     )
+     def forward(
+             self,
+             input_ids: Optional[torch.LongTensor] = None,
+             position_ids: Optional[torch.LongTensor] = None,
+             attention_mask: Optional[torch.Tensor] = None,
+             past_key_values: Optional[Tuple[Tuple[torch.Tensor, torch.Tensor], ...]] = None,
+             inputs_embeds: Optional[torch.LongTensor] = None,
+             use_cache: Optional[bool] = None,
+             output_attentions: Optional[bool] = None,
+             output_hidden_states: Optional[bool] = None,
+             return_dict: Optional[bool] = None,
+     ) -> Union[Tuple[torch.Tensor, ...], BaseModelOutputWithPast]:
+
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+         use_cache = use_cache if use_cache is not None else self.config.use_cache
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         if input_ids is not None and inputs_embeds is not None:
+             raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
+         elif input_ids is not None:
+             batch_size, seq_length = input_ids.shape[:2]
+         elif inputs_embeds is not None:
+             batch_size, seq_length, _ = inputs_embeds.shape[:2]
+         else:
+             raise ValueError("You have to specify either input_ids or inputs_embeds")
+
+         if past_key_values is None:
+             past_key_values = tuple([None] * len(self.layers))
+             seq = input_ids[0].tolist()
+
+             if attention_mask is None:
+                 attention_mask = self.get_masks(
+                     seq=seq,
+                     device=input_ids.device
+                 )
+
+             if position_ids is None:
+                 MASK, gMASK = 150000, 150001
+                 mask_token = MASK if MASK in input_ids else gMASK
+                 use_gmask = MASK not in input_ids  # True when falling back to the [gMASK] token
+
+                 mask_position = seq.index(mask_token)
+                 position_ids = self.get_position_ids(
+                     seq=seq,
+                     mask_position=mask_position,
+                     device=input_ids.device,
+                     gmask=use_gmask
+                 )
+
+         if inputs_embeds is None:
+             inputs_embeds = self.word_embeddings(input_ids)
+
+         # [seq_len, batch, hidden_size]
+         hidden_states = inputs_embeds.transpose(0, 1)
+
+         presents = () if use_cache else None
+         all_self_attentions = () if output_attentions else None
+         all_hidden_states = () if output_hidden_states else None
+
+         seq_length_with_past = seq_length
+         past_key_values_length = 0
+         if past_key_values[0] is not None:
+             past_key_values_length = past_key_values[0][0].shape[0]
+             seq_length_with_past = seq_length_with_past + past_key_values_length
+         if attention_mask is None:
+             attention_mask = torch.zeros(1, 1, device=input_ids.device).bool()
+
+         else:
+             attention_mask = attention_mask.to(input_ids.device)
+
+         for i, layer in enumerate(self.layers):
+
+             if output_hidden_states:
+                 all_hidden_states = all_hidden_states + (hidden_states,)
+
+             layer_ret = layer(
+                 hidden_states,
+                 position_ids=position_ids,
+                 attention_mask=attention_mask,
+                 layer_id=torch.tensor(i),
+                 layer_past=past_key_values[i],
+                 use_cache=use_cache,
+                 output_attentions=output_attentions
+             )
+
+             hidden_states = layer_ret[0]
+
+             if use_cache:
+                 presents = presents + (layer_ret[1],)
+
+             if output_attentions:
+                 all_self_attentions = all_self_attentions + (layer_ret[2 if use_cache else 1],)
+
+         # Final layer norm.
+         hidden_states = self.final_layernorm(hidden_states)
+
+         if output_hidden_states:
+             all_hidden_states = all_hidden_states + (hidden_states,)
+
+         if not return_dict:
+             return tuple(v for v in [hidden_states, presents, all_hidden_states, all_self_attentions] if v is not None)
+
+         return BaseModelOutputWithPast(
+             last_hidden_state=hidden_states,
+             past_key_values=presents,
+             hidden_states=all_hidden_states,
+             attentions=all_self_attentions,
+         )
+
+
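
`get_position_ids` above (and `get_masks_and_position_ids` below) implement GLM's 2D positional encoding: the first row gives every generated token the position of the [MASK]/[gMASK] slot it fills, and the second row counts positions inside the generated block. A standalone sketch with a toy layout — `[t0 t1 MASK t2 bos g0 g1]`, where `bos` sits at index 4 and the mask being filled at index 2 (token ids and sizes are hypothetical):

```python
import torch

seq_length, context_length, mask_position = 4, 7, 2  # bos index, total length, mask slot

position_ids = torch.arange(context_length, dtype=torch.long)
position_ids[seq_length:] = mask_position            # generated tokens reuse the mask slot
block_position_ids = torch.cat((
    torch.zeros(seq_length, dtype=torch.long),
    torch.arange(context_length - seq_length, dtype=torch.long) + 1,
))
print(torch.stack((position_ids, block_position_ids), dim=0))
# tensor([[0, 1, 2, 3, 2, 2, 2],
#         [0, 0, 0, 0, 1, 2, 3]])
```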
908
+ class ChatGLMForConditionalGeneration(ChatGLMPreTrainedModel):
909
+ def __init__(self, config):
910
+ super().__init__(config)
911
+
912
+ # self.hidden_size = config.hidden_size
913
+ # self.params_dtype = torch.half
914
+ # self.vocab_size = config.vocab_size
915
+ self.max_sequence_length = config.max_sequence_length
916
+
917
+ self.position_encoding_2d = config.position_encoding_2d
918
+
919
+ self.transformer = ChatGLMModel(config)
920
+
921
+ self.lm_head = skip_init(
922
+ nn.Linear,
923
+ config.hidden_size,
924
+ config.vocab_size,
925
+ bias=False,
926
+ dtype=torch.half
927
+ )
928
+
929
+ def get_output_embeddings(self):
930
+ return self.lm_head
931
+
932
+ def set_output_embeddings(self, new_embeddings):
933
+ self.lm_head = new_embeddings
934
+
935
+     def get_masks_and_position_ids(self, seq, mask_position, context_length, device, gmask=False):
+         attention_mask = torch.ones((1, context_length, context_length), device=device)
+         attention_mask.tril_()
+         attention_mask[..., :context_length - 1] = 1
+         attention_mask.unsqueeze_(1)
+         attention_mask = (attention_mask < 0.5).bool()
+
+         if self.position_encoding_2d:
+             seq_length = seq.index(self.config.bos_token_id)
+             position_ids = torch.arange(context_length, dtype=torch.long, device=device)
+             if not gmask:
+                 position_ids[seq_length:] = mask_position
+             block_position_ids = torch.cat((
+                 torch.zeros(seq_length, dtype=torch.long, device=device),
+                 torch.arange(context_length - seq_length, dtype=torch.long, device=device) + 1
+             ))
+             position_ids = torch.stack((position_ids, block_position_ids), dim=0)
+         else:
+             position_ids = torch.arange(context_length, dtype=torch.long, device=device)
+             if not gmask:
+                 position_ids[context_length - 1:] = mask_position
+
+         position_ids = position_ids.unsqueeze(0)
+
+         return attention_mask, position_ids
+
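A toy trace of the 2D encoding above may help (illustration only; the token names and the bos index are invented):

    # seq = [q1, q2, gMASK, bos, a1]      ->  seq.index(bos_token_id) == 3
    # position_ids[0] = [0, 1, 2, 3, 4]   # absolute positions (gmask=True keeps the arange)
    # position_ids[1] = [0, 0, 0, 1, 2]   # block positions: 0 for the prompt, counting up after bos
    # with MASK instead of gMASK (gmask=False), position_ids[0][3:] is clamped to mask_position == 2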
+     def prepare_inputs_for_generation(
+             self,
+             input_ids: torch.LongTensor,
+             past: Optional[torch.Tensor] = None,
+             past_key_values: Optional[torch.Tensor] = None,
+             attention_mask: Optional[torch.Tensor] = None,
+             **kwargs
+     ) -> dict:
+         MASK, gMASK = 150000, 150001
+         mask_token = MASK if MASK in input_ids else gMASK
+         use_gmask = MASK not in input_ids
+         seq = input_ids[0].tolist()
+
+         # Check membership before calling .index(), which would otherwise raise first.
+         if mask_token not in seq:
+             raise ValueError("You have to add either [MASK] or [gMASK] in your input")
+         mask_position = seq.index(mask_token)
+
+         # only last token for input_ids if past is not None
+         if past is not None or past_key_values is not None:
+             context_length = seq.index(self.config.bos_token_id)
+             last_token = input_ids[:, -1].unsqueeze(-1)
+             if self.position_encoding_2d:
+                 position_ids = torch.tensor([[[mask_position], [len(seq) - context_length]]], dtype=torch.long,
+                                             device=input_ids.device)
+             else:
+                 position_ids = torch.tensor([[mask_position]], dtype=torch.long, device=input_ids.device)
+
+             if past is None:
+                 past = past_key_values
+             return {
+                 "input_ids": last_token,
+                 "past_key_values": past,
+                 "position_ids": position_ids,
+             }
+         else:
+             attention_mask, position_ids = self.get_masks_and_position_ids(
+                 seq=seq,
+                 mask_position=mask_position,
+                 context_length=len(seq),
+                 device=input_ids.device,
+                 gmask=use_gmask
+             )
+
+             return {
+                 "input_ids": input_ids,
+                 "past_key_values": past,
+                 "position_ids": position_ids,
+                 "attention_mask": attention_mask
+             }
+
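Note the contract this sets up for cached decoding: once a cache exists, only the last token is fed back, paired with a fixed 2D position entry. Per-step shapes (a sketch, batch size 1):

    # step 0 (no cache): input_ids (1, L), position_ids (1, 2, L), attention_mask (1, 1, L, L)
    # step k > 0:        input_ids (1, 1), position_ids (1, 2, 1) == [[[mask_position], [k]]]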
+     def forward(
+             self,
+             input_ids: Optional[torch.Tensor] = None,
+             position_ids: Optional[torch.Tensor] = None,
+             attention_mask: Optional[torch.Tensor] = None,
+             past_key_values: Optional[Tuple[torch.FloatTensor]] = None,
+             inputs_embeds: Optional[torch.Tensor] = None,
+             labels: Optional[torch.Tensor] = None,
+             use_cache: Optional[bool] = None,
+             output_attentions: Optional[bool] = None,
+             output_hidden_states: Optional[bool] = None,
+             return_dict: Optional[bool] = None,
+     ):
+         use_cache = use_cache if use_cache is not None else self.config.use_cache
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         transformer_outputs = self.transformer(
+             input_ids=input_ids,
+             position_ids=position_ids,
+             attention_mask=attention_mask,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+
+         hidden_states = transformer_outputs[0]
+
+         lm_logits = self.lm_head(hidden_states).permute(1, 0, 2).contiguous()
+
+         loss = None
+         if labels is not None:
+             lm_logits = lm_logits.to(torch.float32)
+
+             # Shift so that tokens < n predict n
+             shift_logits = lm_logits[..., :-1, :].contiguous()
+             shift_labels = labels[..., 1:].contiguous()
+             # Flatten the tokens
+             loss_fct = CrossEntropyLoss()
+             loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
+
+             lm_logits = lm_logits.to(hidden_states.dtype)
+             loss = loss.to(hidden_states.dtype)
+
+         if not return_dict:
+             output = (lm_logits,) + transformer_outputs[1:]
+             return ((loss,) + output) if loss is not None else output
+
+         return CausalLMOutputWithPast(
+             loss=loss,
+             logits=lm_logits,
+             past_key_values=transformer_outputs.past_key_values,
+             hidden_states=transformer_outputs.hidden_states,
+             attentions=transformer_outputs.attentions,
+         )
+
+     @staticmethod
+     def _reorder_cache(
+             past: Tuple[Tuple[torch.Tensor, torch.Tensor], ...], beam_idx: torch.LongTensor
+     ) -> Tuple[Tuple[torch.Tensor, torch.Tensor], ...]:
+         """
+         This function is used to re-order the `past_key_values` cache if [`~PreTrainedModel.beam_search`] or
+         [`~PreTrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct
+         beam_idx at every generation step.
+
+         Output shares the same memory storage as `past`.
+         """
+         return tuple(
+             (
+                 layer_past[0].index_select(1, beam_idx.to(layer_past[0].device)),
+                 layer_past[1].index_select(1, beam_idx.to(layer_past[1].device)),
+             )
+             for layer_past in past
+         )
+
+     def process_response(self, response):
+         response = response.strip()
+         response = response.replace("[[训练时间]]", "2023年")
+         punkts = [
+             [",", ","],
+             ["!", "!"],
+             [":", ":"],
+             [";", ";"],
+             [r"\?", "?"],
+         ]
+         for item in punkts:
+             response = re.sub(r"([\u4e00-\u9fff])%s" % item[0], r"\1%s" % item[1], response)
+             response = re.sub(r"%s([\u4e00-\u9fff])" % item[0], r"%s\1" % item[1], response)
+         return response
+
+     @torch.no_grad()
+     def chat(self, tokenizer, query: str, history: List[Tuple[str, str]] = None, max_length: int = 2048, num_beams=1,
+              do_sample=True, top_p=0.7, temperature=0.95, logits_processor=None, **kwargs):
+         if history is None:
+             history = []
+         if logits_processor is None:
+             logits_processor = LogitsProcessorList()
+         logits_processor.append(InvalidScoreLogitsProcessor())
+         gen_kwargs = {"max_length": max_length, "num_beams": num_beams, "do_sample": do_sample, "top_p": top_p,
+                       "temperature": temperature, "logits_processor": logits_processor, **kwargs}
+         if not history:
+             prompt = query
+         else:
+             prompt = ""
+             for i, (old_query, response) in enumerate(history):
+                 prompt += "[Round {}]\n问:{}\n答:{}\n".format(i, old_query, response)
+             prompt += "[Round {}]\n问:{}\n答:".format(len(history), query)
+         inputs = tokenizer([prompt], return_tensors="pt", padding=True)
+         inputs = inputs.to(self.device)
+         outputs = self.generate(**inputs, **gen_kwargs)
+         outputs = outputs.tolist()[0][len(inputs["input_ids"][0]):]
+         response = tokenizer.decode(outputs)
+         response = self.process_response(response)
+         history = history + [(query, response)]
+         return response, history
+
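A minimal usage sketch for the two chat entry points (assumes the upstream THUDM/chatglm-6b repo id and a CUDA device; substitute this repository's id as needed):

    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
    model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda().eval()

    response, history = model.chat(tokenizer, "你好")
    for response, history in model.stream_chat(tokenizer, "谢谢", history=history):
        pass  # `response` is re-decoded from the full output on every step
    print(response)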
+     @torch.no_grad()
+     def stream_chat(self, tokenizer, query: str, history: List[Tuple[str, str]] = None, max_length: int = 2048,
+                     do_sample=True, top_p=0.7, temperature=0.95, logits_processor=None, **kwargs):
+         if history is None:
+             history = []
+         if logits_processor is None:
+             logits_processor = LogitsProcessorList()
+         logits_processor.append(InvalidScoreLogitsProcessor())
+         gen_kwargs = {"max_length": max_length, "do_sample": do_sample, "top_p": top_p,
+                       "temperature": temperature, "logits_processor": logits_processor, **kwargs}
+         if not history:
+             prompt = query
+         else:
+             prompt = ""
+             for i, (old_query, response) in enumerate(history):
+                 prompt += "[Round {}]\n问:{}\n答:{}\n".format(i, old_query, response)
+             prompt += "[Round {}]\n问:{}\n答:".format(len(history), query)
+         inputs = tokenizer([prompt], return_tensors="pt", padding=True)
+         inputs = inputs.to(self.device)
+         for outputs in self.stream_generate(**inputs, **gen_kwargs):
+             outputs = outputs.tolist()[0][len(inputs["input_ids"][0]):]
+             response = tokenizer.decode(outputs)
+             response = self.process_response(response)
+             new_history = history + [(query, response)]
+             yield response, new_history
+
+     @torch.no_grad()
+     def stream_generate(
+             self,
+             input_ids,
+             generation_config: Optional[GenerationConfig] = None,
+             logits_processor: Optional[LogitsProcessorList] = None,
+             stopping_criteria: Optional[StoppingCriteriaList] = None,
+             prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor], List[int]]] = None,
+             **kwargs,
+     ):
+         batch_size, input_ids_seq_length = input_ids.shape[0], input_ids.shape[-1]
+
+         if generation_config is None:
+             generation_config = self.generation_config
+         generation_config = copy.deepcopy(generation_config)
+         model_kwargs = generation_config.update(**kwargs)
+         bos_token_id, eos_token_id = generation_config.bos_token_id, generation_config.eos_token_id
+
+         if isinstance(eos_token_id, int):
+             eos_token_id = [eos_token_id]
+
+         has_default_max_length = kwargs.get("max_length") is None and generation_config.max_length is not None
+         if has_default_max_length and generation_config.max_new_tokens is None:
+             warnings.warn(
+                 f"Using `max_length`'s default ({generation_config.max_length}) to control the generation length. "
+                 "This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we"
+                 " recommend using `max_new_tokens` to control the maximum length of the generation.",
+                 UserWarning,
+             )
+         elif generation_config.max_new_tokens is not None:
+             generation_config.max_length = generation_config.max_new_tokens + input_ids_seq_length
+             if not has_default_max_length:
+                 logger.warning(
+                     f"Both `max_new_tokens` (={generation_config.max_new_tokens}) and `max_length`(="
+                     f"{generation_config.max_length}) seem to have been set. `max_new_tokens` will take precedence. "
+                     "Please refer to the documentation for more information. "
+                     "(https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)"
+                 )
+
+         if input_ids_seq_length >= generation_config.max_length:
+             input_ids_string = "decoder_input_ids" if self.config.is_encoder_decoder else "input_ids"
+             logger.warning(
+                 f"Input length of {input_ids_string} is {input_ids_seq_length}, but `max_length` is set to"
+                 f" {generation_config.max_length}. This can lead to unexpected behavior. You should consider"
+                 " increasing `max_new_tokens`."
+             )
+
+         # Set generation parameters if not already defined
+         logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList()
+         stopping_criteria = stopping_criteria if stopping_criteria is not None else StoppingCriteriaList()
+
+         logits_processor = self._get_logits_processor(
+             generation_config=generation_config,
+             input_ids_seq_length=input_ids_seq_length,
+             encoder_input_ids=input_ids,
+             prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
+             logits_processor=logits_processor,
+         )
+
+         stopping_criteria = self._get_stopping_criteria(
+             generation_config=generation_config, stopping_criteria=stopping_criteria
+         )
+         logits_warper = self._get_logits_warper(generation_config)
+
+         unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)
+         scores = None
+         while True:
+             model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
+             # forward pass to get next token
+             outputs = self(
+                 **model_inputs,
+                 return_dict=True,
+                 output_attentions=False,
+                 output_hidden_states=False,
+             )
+
+             next_token_logits = outputs.logits[:, -1, :]
+
+             # pre-process distribution
+             next_token_scores = logits_processor(input_ids, next_token_logits)
+             next_token_scores = logits_warper(input_ids, next_token_scores)
+
+             # sample
+             probs = nn.functional.softmax(next_token_scores, dim=-1)
+             if generation_config.do_sample:
+                 next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
+             else:
+                 next_tokens = torch.argmax(probs, dim=-1)
+
+             # update generated ids, model inputs, and length for next step
+             input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)
+             model_kwargs = self._update_model_kwargs_for_generation(
+                 outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder
+             )
+             # a sequence is finished once it emits any of the eos tokens
+             unfinished_sequences = unfinished_sequences.mul(
+                 torch.stack([(next_tokens != i).long() for i in eos_token_id]).prod(dim=0)
+             )
+
+             # stop when each sentence is finished, or if we exceed the maximum length
+             if unfinished_sequences.max() == 0 or stopping_criteria(input_ids, scores):
+                 break
+             yield input_ids
+
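Each yield returns the full `input_ids` (prompt plus everything generated so far), which is why `stream_chat` slices off the prompt before decoding. A direct consumer looks like this (a sketch; `tokenizer` and `model` as in the chat example above):

    inputs = tokenizer(["[Round 0]\n问:你好\n答:"], return_tensors="pt").to(model.device)
    prompt_len = inputs["input_ids"].shape[-1]
    for ids in model.stream_generate(**inputs, max_new_tokens=64):
        print(tokenizer.decode(ids[0, prompt_len:].tolist()))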
+     def quantize(self, bits: int):
+         from .quantization import quantize
+         self.transformer = quantize(self.transformer, bits)
+         return self
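`quantize` rewrites the transformer stack in place and leaves the fp16 `lm_head` untouched. A typical call, assuming fp16 weights already loaded:

    model = model.half().cuda()
    model = model.quantize(4)  # 4- or 8-bit weights; activations stay fp16 (see quantization.py below)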
pytorch_model.bin.index.json ADDED
@@ -0,0 +1,375 @@
+ {
+   "metadata": {
+     "total_size": 13744473856
+   },
+   "weight_map": {
+     "lm_head.weight": "pytorch_model-00008-of-00008.bin",
+     "transformer.final_layernorm.bias": "pytorch_model-00007-of-00008.bin",
+     "transformer.final_layernorm.weight": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.0.attention.dense.bias": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.0.attention.dense.weight": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.0.attention.query_key_value.bias": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.0.attention.query_key_value.weight": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.0.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.0.input_layernorm.bias": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.0.mlp.dense_4h_to_h.bias": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.0.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.0.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.0.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.0.post_attention_layernorm.bias": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.1.attention.dense.bias": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.1.attention.dense.weight": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.1.attention.query_key_value.bias": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.1.attention.query_key_value.weight": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.1.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.1.input_layernorm.bias": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.1.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.1.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.1.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.1.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.1.post_attention_layernorm.bias": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00008.bin",
+     "transformer.layers.10.attention.dense.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.10.attention.dense.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.10.attention.query_key_value.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.10.attention.query_key_value.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.10.attention.rotary_emb.inv_freq": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.10.input_layernorm.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.10.input_layernorm.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.10.mlp.dense_4h_to_h.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.10.mlp.dense_4h_to_h.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.10.mlp.dense_h_to_4h.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.10.mlp.dense_h_to_4h.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.10.post_attention_layernorm.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.10.post_attention_layernorm.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.11.attention.dense.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.11.attention.dense.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.11.attention.query_key_value.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.11.attention.query_key_value.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.11.attention.rotary_emb.inv_freq": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.11.input_layernorm.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.11.input_layernorm.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.11.mlp.dense_4h_to_h.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.11.mlp.dense_4h_to_h.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.11.mlp.dense_h_to_4h.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.11.mlp.dense_h_to_4h.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.11.post_attention_layernorm.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.11.post_attention_layernorm.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.12.attention.dense.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.12.attention.dense.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.12.attention.query_key_value.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.12.attention.query_key_value.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.12.attention.rotary_emb.inv_freq": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.12.input_layernorm.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.12.input_layernorm.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.12.mlp.dense_4h_to_h.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.12.mlp.dense_4h_to_h.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.12.mlp.dense_h_to_4h.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.12.mlp.dense_h_to_4h.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.12.post_attention_layernorm.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.12.post_attention_layernorm.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.13.attention.dense.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.13.attention.dense.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.13.attention.query_key_value.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.13.attention.query_key_value.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.13.attention.rotary_emb.inv_freq": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.13.input_layernorm.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.13.input_layernorm.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.13.mlp.dense_4h_to_h.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.13.mlp.dense_4h_to_h.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.13.mlp.dense_h_to_4h.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.13.mlp.dense_h_to_4h.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.13.post_attention_layernorm.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.13.post_attention_layernorm.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.14.attention.dense.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.14.attention.dense.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.14.attention.query_key_value.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.14.attention.query_key_value.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.14.attention.rotary_emb.inv_freq": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.14.input_layernorm.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.14.input_layernorm.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.14.mlp.dense_4h_to_h.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.14.mlp.dense_4h_to_h.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.14.mlp.dense_h_to_4h.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.14.mlp.dense_h_to_4h.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.14.post_attention_layernorm.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.14.post_attention_layernorm.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.15.attention.dense.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.15.attention.dense.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.15.attention.query_key_value.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.15.attention.query_key_value.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.15.attention.rotary_emb.inv_freq": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.15.input_layernorm.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.15.input_layernorm.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.15.mlp.dense_4h_to_h.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.15.mlp.dense_4h_to_h.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.15.mlp.dense_h_to_4h.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.15.mlp.dense_h_to_4h.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.15.post_attention_layernorm.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.15.post_attention_layernorm.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.16.attention.dense.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.16.attention.dense.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.16.attention.query_key_value.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.16.attention.query_key_value.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.16.attention.rotary_emb.inv_freq": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.16.input_layernorm.bias": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.16.input_layernorm.weight": "pytorch_model-00004-of-00008.bin",
+     "transformer.layers.16.mlp.dense_4h_to_h.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.16.mlp.dense_4h_to_h.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.16.mlp.dense_h_to_4h.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.16.mlp.dense_h_to_4h.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.16.post_attention_layernorm.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.16.post_attention_layernorm.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.17.attention.dense.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.17.attention.dense.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.17.attention.query_key_value.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.17.attention.query_key_value.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.17.attention.rotary_emb.inv_freq": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.17.input_layernorm.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.17.input_layernorm.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.17.mlp.dense_4h_to_h.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.17.mlp.dense_4h_to_h.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.17.mlp.dense_h_to_4h.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.17.mlp.dense_h_to_4h.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.17.post_attention_layernorm.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.17.post_attention_layernorm.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.18.attention.dense.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.18.attention.dense.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.18.attention.query_key_value.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.18.attention.query_key_value.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.18.attention.rotary_emb.inv_freq": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.18.input_layernorm.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.18.input_layernorm.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.18.mlp.dense_4h_to_h.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.18.mlp.dense_4h_to_h.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.18.mlp.dense_h_to_4h.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.18.mlp.dense_h_to_4h.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.18.post_attention_layernorm.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.18.post_attention_layernorm.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.19.attention.dense.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.19.attention.dense.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.19.attention.query_key_value.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.19.attention.query_key_value.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.19.attention.rotary_emb.inv_freq": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.19.input_layernorm.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.19.input_layernorm.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.19.mlp.dense_4h_to_h.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.19.mlp.dense_4h_to_h.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.19.mlp.dense_h_to_4h.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.19.mlp.dense_h_to_4h.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.19.post_attention_layernorm.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.19.post_attention_layernorm.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.2.attention.dense.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.2.attention.dense.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.2.attention.query_key_value.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.2.attention.query_key_value.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.2.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.2.input_layernorm.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.2.input_layernorm.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.2.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.2.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.2.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.2.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.2.post_attention_layernorm.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.2.post_attention_layernorm.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.20.attention.dense.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.20.attention.dense.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.20.attention.query_key_value.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.20.attention.query_key_value.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.20.attention.rotary_emb.inv_freq": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.20.input_layernorm.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.20.input_layernorm.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.20.mlp.dense_4h_to_h.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.20.mlp.dense_4h_to_h.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.20.mlp.dense_h_to_4h.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.20.mlp.dense_h_to_4h.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.20.post_attention_layernorm.bias": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.20.post_attention_layernorm.weight": "pytorch_model-00005-of-00008.bin",
+     "transformer.layers.21.attention.dense.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.21.attention.dense.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.21.attention.query_key_value.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.21.attention.query_key_value.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.21.attention.rotary_emb.inv_freq": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.21.input_layernorm.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.21.input_layernorm.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.21.mlp.dense_4h_to_h.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.21.mlp.dense_4h_to_h.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.21.mlp.dense_h_to_4h.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.21.mlp.dense_h_to_4h.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.21.post_attention_layernorm.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.21.post_attention_layernorm.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.22.attention.dense.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.22.attention.dense.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.22.attention.query_key_value.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.22.attention.query_key_value.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.22.attention.rotary_emb.inv_freq": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.22.input_layernorm.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.22.input_layernorm.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.22.mlp.dense_4h_to_h.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.22.mlp.dense_4h_to_h.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.22.mlp.dense_h_to_4h.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.22.mlp.dense_h_to_4h.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.22.post_attention_layernorm.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.22.post_attention_layernorm.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.23.attention.dense.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.23.attention.dense.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.23.attention.query_key_value.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.23.attention.query_key_value.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.23.attention.rotary_emb.inv_freq": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.23.input_layernorm.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.23.input_layernorm.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.23.mlp.dense_4h_to_h.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.23.mlp.dense_4h_to_h.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.23.mlp.dense_h_to_4h.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.23.mlp.dense_h_to_4h.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.23.post_attention_layernorm.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.23.post_attention_layernorm.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.24.attention.dense.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.24.attention.dense.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.24.attention.query_key_value.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.24.attention.query_key_value.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.24.attention.rotary_emb.inv_freq": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.24.input_layernorm.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.24.input_layernorm.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.24.mlp.dense_4h_to_h.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.24.mlp.dense_4h_to_h.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.24.mlp.dense_h_to_4h.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.24.mlp.dense_h_to_4h.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.24.post_attention_layernorm.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.24.post_attention_layernorm.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.25.attention.dense.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.25.attention.dense.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.25.attention.query_key_value.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.25.attention.query_key_value.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.25.attention.rotary_emb.inv_freq": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.25.input_layernorm.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.25.input_layernorm.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.25.mlp.dense_4h_to_h.bias": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.25.mlp.dense_4h_to_h.weight": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.25.mlp.dense_h_to_4h.bias": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.25.mlp.dense_h_to_4h.weight": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.25.post_attention_layernorm.bias": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.25.post_attention_layernorm.weight": "pytorch_model-00006-of-00008.bin",
+     "transformer.layers.26.attention.dense.bias": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.26.attention.dense.weight": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.26.attention.query_key_value.bias": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.26.attention.query_key_value.weight": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.26.attention.rotary_emb.inv_freq": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.26.input_layernorm.bias": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.26.input_layernorm.weight": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.26.mlp.dense_4h_to_h.bias": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.26.mlp.dense_4h_to_h.weight": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.26.mlp.dense_h_to_4h.bias": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.26.mlp.dense_h_to_4h.weight": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.26.post_attention_layernorm.bias": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.26.post_attention_layernorm.weight": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.27.attention.dense.bias": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.27.attention.dense.weight": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.27.attention.query_key_value.bias": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.27.attention.query_key_value.weight": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.27.attention.rotary_emb.inv_freq": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.27.input_layernorm.bias": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.27.input_layernorm.weight": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.27.mlp.dense_4h_to_h.bias": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.27.mlp.dense_4h_to_h.weight": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.27.mlp.dense_h_to_4h.bias": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.27.mlp.dense_h_to_4h.weight": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.27.post_attention_layernorm.bias": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.27.post_attention_layernorm.weight": "pytorch_model-00007-of-00008.bin",
+     "transformer.layers.3.attention.dense.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.3.attention.dense.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.3.attention.query_key_value.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.3.attention.query_key_value.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.3.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.3.input_layernorm.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.3.input_layernorm.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.3.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.3.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.3.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.3.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.3.post_attention_layernorm.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.3.post_attention_layernorm.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.4.attention.dense.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.4.attention.dense.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.4.attention.query_key_value.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.4.attention.query_key_value.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.4.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.4.input_layernorm.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.4.input_layernorm.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.4.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.4.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.4.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.4.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.4.post_attention_layernorm.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.4.post_attention_layernorm.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.5.attention.dense.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.5.attention.dense.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.5.attention.query_key_value.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.5.attention.query_key_value.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.5.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.5.input_layernorm.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.5.input_layernorm.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.5.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.5.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.5.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.5.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.5.post_attention_layernorm.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.5.post_attention_layernorm.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.6.attention.dense.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.6.attention.dense.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.6.attention.query_key_value.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.6.attention.query_key_value.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.6.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.6.input_layernorm.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.6.input_layernorm.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.6.mlp.dense_4h_to_h.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.6.mlp.dense_4h_to_h.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.6.mlp.dense_h_to_4h.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.6.mlp.dense_h_to_4h.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.6.post_attention_layernorm.bias": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.6.post_attention_layernorm.weight": "pytorch_model-00002-of-00008.bin",
+     "transformer.layers.7.attention.dense.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.7.attention.dense.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.7.attention.query_key_value.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.7.attention.query_key_value.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.7.attention.rotary_emb.inv_freq": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.7.input_layernorm.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.7.input_layernorm.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.7.mlp.dense_4h_to_h.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.7.mlp.dense_4h_to_h.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.7.mlp.dense_h_to_4h.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.7.mlp.dense_h_to_4h.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.7.post_attention_layernorm.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.7.post_attention_layernorm.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.8.attention.dense.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.8.attention.dense.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.8.attention.query_key_value.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.8.attention.query_key_value.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.8.attention.rotary_emb.inv_freq": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.8.input_layernorm.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.8.input_layernorm.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.8.mlp.dense_4h_to_h.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.8.mlp.dense_4h_to_h.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.8.mlp.dense_h_to_4h.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.8.mlp.dense_h_to_4h.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.8.post_attention_layernorm.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.8.post_attention_layernorm.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.9.attention.dense.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.9.attention.dense.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.9.attention.query_key_value.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.9.attention.query_key_value.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.9.attention.rotary_emb.inv_freq": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.9.input_layernorm.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.9.input_layernorm.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.9.mlp.dense_4h_to_h.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.9.mlp.dense_4h_to_h.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.9.mlp.dense_h_to_4h.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.9.mlp.dense_h_to_4h.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.9.post_attention_layernorm.bias": "pytorch_model-00003-of-00008.bin",
+     "transformer.layers.9.post_attention_layernorm.weight": "pytorch_model-00003-of-00008.bin",
+     "transformer.word_embeddings.weight": "pytorch_model-00001-of-00008.bin"
+   }
+ }
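The index maps every parameter to one of eight shards totalling 13,744,473,856 bytes (about 12.8 GiB of fp16 tensors); `from_pretrained` reads it to decide which shard holds each weight. A quick way to inspect it:

    import json
    from collections import Counter

    with open("pytorch_model.bin.index.json") as f:
        index = json.load(f)
    print(index["metadata"]["total_size"])        # 13744473856
    print(Counter(index["weight_map"].values()))  # tensor count per shard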
quantization.py ADDED
@@ -0,0 +1,187 @@
+ from torch.nn import Linear
+ from torch.nn.parameter import Parameter
+
+ import bz2
+ import torch
+ import base64
+ import ctypes
+
+ from typing import List
+ from cpm_kernels.kernels.base import LazyKernelCModule, KernelFunction, round_up
+
+
+ class W8A16Linear(torch.autograd.Function):
+     @staticmethod
+     def forward(ctx, inp: torch.Tensor, quant_w: torch.Tensor, scale_w: torch.Tensor, weight_bit_width):
+         ctx.inp_shape = inp.size()
+         ctx.weight_shape = quant_w.size()
+         ctx.weight_bit_width = weight_bit_width
+         out_features = quant_w.size(0)
+         inp = inp.contiguous().view(-1, inp.size(-1))
+         weight = extract_weight_to_half(quant_w, scale_w, weight_bit_width)
+         output = inp.mm(weight.t())
+         ctx.save_for_backward(inp, quant_w, scale_w)
+         return output.view(*(ctx.inp_shape[:-1] + (out_features,)))
+
+     @staticmethod
+     def backward(ctx, grad_output: torch.Tensor):
+         inp, quant_w, scale_w = ctx.saved_tensors
+         weight = extract_weight_to_half(quant_w, scale_w, ctx.weight_bit_width)
+         grad_output = grad_output.contiguous().view(-1, weight.size(0))
+         grad_input = grad_output.mm(weight)
+         grad_weight = grad_output.t().mm(inp)
+         # One gradient per forward input: inp, quant_w, scale_w, weight_bit_width.
+         return grad_input.view(ctx.inp_shape), grad_weight.view(ctx.weight_shape), None, None
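`W8A16Linear` keeps weights quantized at rest but computes in fp16: each call dequantizes the stored integers (per output channel, symmetric, no zero point) and runs an ordinary matmul, and the backward pass differentiates through that dequantized weight. A reference of the int8 forward semantics (a sketch of what the CUDA path computes, not the CUDA path itself):

    def w8a16_reference(x: torch.Tensor, quant_w: torch.Tensor, scale_w: torch.Tensor) -> torch.Tensor:
        w = quant_w.to(torch.half) * scale_w.to(torch.half)[:, None]  # dequantize per output channel
        return x.to(torch.half) @ w.t()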
+
+
+ class Kernel:
+     def __init__(self, code: bytes, function_names: List[str]):
+         self.code = code
+         self._function_names = function_names
+         self._cmodule = LazyKernelCModule(self.code)
+
+         for name in self._function_names:
+             setattr(self, name, KernelFunction(self._cmodule, name))
+
+
+ quantization_code = "$QlpoOTFBWSZTWU9yuJUAQHN//////////f/n/8/n///n//bt4dTidcVx8X3V9FV/92/v4B7/AD5FBQFAAAChSgKpFCFAFVSigUAAAEKhSgUUqgFBKigqVREQAABQBQIANDTTIGI00BkZBkNGE0A0BkBkGQGRkaNAaAGQNBoGgDIAAYIGTI0DQAQAaGmmQMRpoDIyDIaMJoBoDIDIMgMjI0aA0AMgaDQNAGQAAwQMmRoGgAgA0NNMgYjTQGRkGQ0YTQDQGQGQZAZGRo0BoAZA0GgaAMgABggZMjQNABABoaaZAxGmgMjIMhowmgGgMgMgyAyMjRoDQAyBoNA0AZAADBAyZGgaAAmqU1NEgJqnptU/Sn4jRR6J6epk2pqb1Q/SgAPUGgyNNGjQ2SBpoAZAAGg0NB6mgDIAAAAA2oaApSREBNAARhGiYEaEwU8pvImlP0k2aam1GaGqbFNM1MHpTwmkepmyU9R6nqPKekHqNNPUxNGhp6n6p6QaZ6o9TG1GMqcoV9ly6nRanHlq6zPNbnGZNi6HSug+2nPiZ13XcnFYZW+45W11CumhzYhchOJ2GLLV1OBjBjGf4TptOddTSOcVxhqYZMYwZXZZY00zI1paX5X9J+b+f4e+x43RXSxXPOdquiGpduatGyXneN696M9t4HU2eR5XX/kPhP261NTx3JO1Ow7LyuDmeo9a7d351T1ZxnvnrvYnrXv/hXxPCeuYx2XsNmO003eg9J3Z6U7b23meJ4ri01OdzTk9BNO96brz+qT5nuvvH3ds/G+m/JcG/F2XYuhXlvO+jP7U3XgrzPN/lr8Sf1n6j4j7jZs+s/T0tNaNNYzTs12rxjwztHlnire3Nzc3N1wuBwOBwXBvZfoHpD7rFmR99V5vj3aXza3xdBbXMalubTg/jIv5dfAi54Pdc75j4z412n3Npj3Ld/ENm7a3b/Cod6h/ret1/5vn/C+l+gdslMvgPSLJ8d8q+U66fevYn/tW1chleEtNTGlcHCbLRlq0tHzF5tsbbZZfHjjLgZu42XCuC3NrdjTasZGNzgxPIrGqp7r3p7L2p5XjnpPSmTd5XtzqnB6U87zzg1Ol0zd0zsLszxR6lkxp35u6/teL0L0W922cR7Lu1lpL9CsHirzuM2T+BgsyViT6LHcm0/Vr6U/7LGGyJeqTEjt0PHWhF5mCT7R9mtlDwriYv0Tyr/OxYt6qp5r0mPVT0608TqnqMZaarU2nFwrTzzlrs1ed7z1ux60wyr4ydCaTi3enW8x68x0zU7tXSlcmPSW1mGpWJMg4zmPC2lK96tp0OE80y4MfEvnZj8zGluR6b22ki1Ou9V2nCd9xovcPvcYMZYy0lvN60ScZ45vN6yeCeeXFb1lVjnnCar5fwXwE2bzJ4HI1XVPXfXZMm44GUsMpYsmLB65TuVdm0cl0b+i/wGNN66XjeV7zuPpHcnK/juhhjdfId5jMdE5nN0dGmmm2zZs2cexD5n9p/dY352XsvXHaZNWWsmmS1atjR452nYudzvqv2HMRyvNNnlMcDl3R2+yx2uVrBubTW9icHDVtbNXlZm7jma1rM4VurZZd2y6nUau7ZXZ7bVU+mnoOVxZGMrVmvX60605JwmzGZhhhjTWtaaaMaaGTGmNMZasY0iX8VMUl8eepaIrzGSpemWOQyZORk2bNpjUybMmxqYmknCGCFynutfksaZpjTNMaaatM0xsxcGR0sociNqxNSmhhR1ZJPbsn8qyF0t2qH6iYBclclalbtTTcHTDsPaX6rlnElph2Jyumumtynv2Kk8GI7rsvXbIcJgHJOSaSXnnGaI3m87RtVXJOZ/YtgdTE6Wpha6ZlE8ayXkef1fh602r2WwvfMXtMdLlkfnLFdYYwYso+bWqm7yJqHXZGw2nrS5ZanSYnWlxBxMF1V940K2wdrI7R6OYf7DGGamMmTSbRhlS45xmVOumF1EyPCmHrrN8wwZOOrdNtLeMtzFzDlWnfTBxMk2NaXIZHBYxYLD4w8yju0ao65Vz1OIXoS9dLanwCe1PWrYuWMqf1if1z2k2yYfKJ741PDgno1ZQ8DRqvUny3mNoWTzGO6m1DkrJI8JiR5cSd+vZdGOO8nrMoc5+NDUFsMSXaZJeNlMmGLtJsovOsUp7I9S5VojKxF6bTVEelXqlfJobQr3LozSh2Jk7VcrVMfhXqszGWMzNqGhqZY0OadxkyyMssKugZR0KNFXBHlqwmJgTE/BNVMk6ItJXZMR0H47GpXv/DMOvNkmVuaV1PRfEdxuqc7Hcd+ZV/zTLaRxWk0nl9CdCeM6mn5rstHIBcpiuwmUZXeq81DacHI2rmrZ5SuE5mOZd6LQrZg9mx32TprA8BMo5jKN6yLTCi3WzQaZSuhzTtM1fUTGVpG8Tw+KXI0tjEpiWxtLYynOlktSbVlaI5kxP8TDH8kx50xoxi5KcA4pcja8KWLRlO/Ks6q06ergnvm1ca3Tq8Uw7LTUsmWyctXPWmpitl/uvGcWTGXGuAXDfhqazGmjkxcJW5hMMMMpYsXl2TZYtVOddG3XCarUt6Ptq9CZXSNzyuRzqRZOjsxdBbFVz6OA5HI43r1jityVlVpVkxmOsyaYWE1NTGq1sOVh36mHMcxtSvcy70edG0ZGR3I1Go1GRlV7mWWo1G0ZGRqlvH40l7o4m5xMWLLLYyNjnqc8556mdPqLJ31n/1nWOncxzG1tizrHs/Z+d2vP/B/l8wdJ6rHUn2nbbDq4p6htFtYzMMMTaZis1K5GKzGNmxhmUx2DDlZ/qNnIx41xnaMfCZWYaZWtNLTNW8ND4Fw1MyZOCdM428suKG1ehW8TesOydg7J+YYcD4cYR+8dFK6M4E3HM9ZfRNNL+Sn6rsl4DsrDl2HpPCnfxjGXtbZtYys1ttlyJ4T+BvexjGWRjMszK4Jpc77D3GyuVD7q0+G8m9G+2+rGm7cOR2y7FdtY2XUYx/oNlfRYxhMYyYZkyyg55enna9Kt/FFi6GMMwYwdwxWgxGMLKYmUyGExTKMZkMFhkymKuh0NOBNnBu+23LdwDoZYYzGGMxtORaTU1pjTGWTTGGtMrNWUsyyTTLLG1qy2ZjbK2DBllWqxMtBMaYZQmcE7zvvRcTkclUwdkxTaSdyySt/7fpL+T1v516Ji97fwr5JbLu305zMn5+GMTTZ9F+y7ExwmGVfG44yxn3dLv6l5i+Wth1jCrDq21nW9LqvvDzz3Vf3LLH/O/32TJ/erx3bXftO4eF+G956D952K/An4NfvOpjFjExjevP/UmE0fIoZXx6/w6lX/no3D0bLt+ixjieBM6ksRd0yB4Lt2SwYNE+gd1detlZWUnpiZfGfFaK+4PyCa/v18V8X75pe9fLXzp7l3VjF76vWZmHwGz1IZNWT7b8yddJ4q5kyrVdfru6atWc7bVYztL9Jf4GXvT+Y8m9/YsXP6H018a8D4XVOqvfzqeR+6yZOD8dPv0+U7/q5Pl+2dNb0MjzGVH5p6MNQ7cOWvw62U9aHE8DprDek+McLyvDz+t
e+9Zhq5+YTruufMcWMabqysTmZVWjKPfnK0wyVcrsuhjZRdLkHNvD72b9abriOSGIxiLixMOoalNPXzy+wT/tf+U6HHONfsz+xe8ufHBdQWWGWLA9if0rsnmrxK5LvRZQeWsTCsrmOYy8VteVfuRfcVTtDLItLIsMYxZLdU/DbtSemxF6Z6Zo5WBXE4tFdCyVMMXMTEMZXVlS6Xec2T4e0tHsRcEuWshcJ2YsNF5rUx1E8ifCq6Z+ZP7qdCeu/aTwFd53l16/o0NOw6O3dLavP4Hbi4RdmuDk6DoYaninC0+o4uZjbJ7Rxeu0/FbuFg+q7DVS6fQe0rZ6NDGUNNU6DEqOaLTicKnYZMnBWruljQxoaS3dZhocDge0bSTyOvdAbG5hxe2xji7E/L55xX13wWNDi6HCekcFxfCPGxY0MXC+s7afWaMdDyjyr+o8Rudm/NabOZvdl274zH4f5XK9z6On1Pe/K5TdPAslg77BjuO6Y3eO7GqvOPG/stknp1leyvLL0Z7bl9I4noMvLkzytLhWYzrOZzLXCORe028rORzOg4N/L0HlMOQ3Pgmnbb6KczlabORpu980q37TBqRu0/p3PO6234Bl03Ynuz+9W7gnsEcmvYaYY3aMYY0wx3pYd+ujsXauWdaY5Xkbtl23fPzFHiDB/QMo0yFjBllYxTQYYyxkrwn7JufwJ/PfgJ+C83X69ni6zvXcnyXabv0ncbLwsceS+RNlyN2mnneJtX0ngYO0+e+0+UnA+Wch3ji8hj5an4h+i6XBySU4n+R0roVcbw5yvHrmr4Yw8Y7x6c+9POPYHI5HI5HI5HI5HGXGww4nE4nrVyOR8XeqPEO7PLOiukYa3Novk5hV4cdtYZLI93e+uxff2jRo0aNGjRo0aNG1bVtW1dy3m83m8+tQ5ZzHw3nObwOu8La9Rc1dtkdS8A3eTk823tnktXWlxN6Oixe06zrN70Isd9jiOgZFq9yfkPqP/SLhN2Myl8jDM43bl1nbcb4cO57jlh8Jow6pzXZdL4dyODTuuhu77FyO27DdwdRxmvO+O+3N2+BdqyTwLHVczDVY4UPE4O66/ZO2cx1LFzVdSXtF7G4HMbrauOHRw6c8FdZ5m9fHZHYZXfTlZquyynSyTTKke6vcffSD9pzPA/G7n7jxPmuhc1DHMynPMrGL6AdewYmwu5ko+UUyTwrMv27rPH1v1nGqd87+p6N6LU8k3NEng53xXyHS97+44OSg/sy/hn+Se6yfYNjW0/uTgP+PvWYzLMmjhcLB/gGpri6H83/84eUXWT6T9Hsv7785z/7z4icpW+zfXypuR7rx/gMdZb1/wC678pcs8/2a3mDitGHxl9mfPlll5MafWWqxk/eYuTDgcNMzDGWLWvsuglNxs53GtN6uWpktlW1tZZYcuinMMWmnNnJydze3b2Y1McBxrBkXw799izLMZZYyy0TkbsGM4p03S2uVu5s/XXUdSdec6smVxZYYGpVmT8A+8ajuEyV5FatkvVru2x6uxGXXbH4A+jvgP4GMYy3iPLXzq/6z65+E005ey+cwMZD3fZcqc6xpjTFjQ0P3U+e++cPYmTIwj0nrK5NPTfl3WvpfLtXDcb2HQMudYOxFXQBor4L4T6vrOauFctYXJQ++NUWmJe5bmx1jDiZS1dTqWxo4GR8jm3fttpmPHppk9PEyv4/y8/sO07XacOmcqc0x2Vi9BvNJvN5oW8x4mOsydpidRxMYJPx06m1bqPzq9KtK8sxXNXFodD/+MYYaJTLwOhc9brCsV18oOR1i4tXChyTkq4lf4y1Ke+9axjDHqs1mfBbMXuP4Hzi+X7t8vzv7bHerrUPgPCxhjre4fXdfLNtNM+Jd+Zdh8xd8wP87uNPoPgv4W7/5P2BuxfsMabNnMnza+54Pdi5U671GPZY8CehX8Voeoo7FHpkeEc6715FwHZrIrUrHaviPUbPZHND+IhczrP6FcYvhOZ0Di/ETt0OI+YwNWR9r7tpf6WDeZKZDB1+z2IthOl1mPyb5FluvEx9h9d0NnM0Y1XPFkWIsk1WotJ0PBMmkvjvQTd0e71tfeV+8r8lQ/tpzpsmxJ+InrI/dj2UajUajVTUajatRqNRtGo1Go1Go4wjeMpZFMVV9CHbofPraLsJ3JpWV2XOoanCuFky4y3PPNxucK2uKC1Lbdb1eo+m5XomN6HfeZsabHLHRX/K+offtNGGmHWctcVcG44MdSqsOLY9VzX+Zxfxn2HPdWTpzWvkrtJ8M5zorrKcquRytJ5N5DZmcaW02l76nWO+BqPXm1A2Ry/0q71dH/mqrqeFjkYxjEXtsX8qubTk67rGycyqsdm4tZx5D6D5hhi0waaWmiaMP81Yjii5qxPlPuU/GfTL1Y5E6Jyfiq63qTa39A4J0sOGDgO9WF9bOXl0XfPRbsY2bPNKPy1YrFYrFYmRhhlTIyMjJWJYZHXuCXI8OoXsvfljGLFicNifpp2XunoPiG1wtx3p1Tah+/DD66OnVtVXP9rKbVxOnL0tR/rHtqB5UDErUVcl11D4qqvjpOcxX7armUNJB3LpW6bxVvD08e8h3odKKvyCFZBdSh2FVcST9xV3n3T8t1j7Kr9qgrqXg+13Pt5U7JCvFXVIV1YG5lRhkVYZJYYDDD4KOIMoHCp26WS8GB7uBh2zIdgq/PKyInjV2STShuoapUdCpX1yTwqq/z1VvET7Kh5nVPkO8YyxjLt2MaaMmWTLQvx3qnzltnXW0p2jxgbEtSny/Osv8Y9pLMXYoHVPAhkVdWVeODhR6q9/Sxe2liwwZWMVvFXfRkeIDxAePUPIrdJ4ey6yquzH+PD/bUOWAu05qVHtFd8rrKHSoeNIOUqrYr3FXyToqfYJgwmJdKpXXOwYYegNNGMzfZPp/t3t/DVs4zjNTN61rRqaWaa4NYbRjTa0tWwy2Y2tGN8ZO8ofNKq4j9SL7I+cSm4/6ovLV5HNXLI0jJidwrtk6ynCaP6Z++GjRlWS3tLeW129Mi9evxU9mtz6s5J3Z7M2ngTgnKvmpomxpaLCzPfmx0JWE+m3NLDDGOX47RctdYYNK5jakdqLkRlI39n590T5zctGSwwZZDJj6kW8XSi6ot2MmWWJ0DUT3nuvebBudScjZ79g8cWJ8av0k+/bE5WKd5MdbFpbDVMxu1DVMmtNZGJvq1mtRbn6M+g/kP0FwDwr7quZs7xosNGpbscyxhhd9TyJyFwbLcxlTasg75vW7TsV5K7ji44XPMMrdoj+Y3rT0Hie62nlYV/pwczzOmdLqLhYkzGMzCZWGMQzGMSsZYY6Di1t4nlJ+Em63mJxrVLxPbYxNEdgc1dU2iOKyoYYWjNrEeHTYybVk0atSa7ehuwsWMWTqn1TrnS6hYsi71d1+s+k+ic70e20fzE/VaTdxT9ZtU4GIXdeNx3X77guYYfpHeTQjaMX6brOu4OY4K7Y2d9mbHarI5ox3p4GpJ2Vd/Tst60f7j999pppjR+Q/Qf8J/VaORs3cji7FfFuN61+ui9s8hix1OCh5KGVV23BPXvZfz3CLyH
pix+exi8z/KnCnosY2eunor+cxyPO/xJ0vKey9OvE9VjqaYu0x3Z3jd6o2b1T12D+F8l232lwaaacD5LE8LBxu7WTlbWraWpew8Xexjel3E+wWD4APITdNqR8F3R3T0lunCQ4GaE9R37DxeCYfcHi4xci5ovKfxVs55y2hf+65E/Xdp6jR5nrebTmi5incpkyOjs50JvrZwstbbW6kfuuQw+2mykf/EXNFzxfKTrxew929TR6bWnGL//F3JFOFCQT3K4lQ"
+
+ kernels = Kernel(
+     bz2.decompress(base64.b64decode(quantization_code)),
+     [
+         "int4WeightCompression",
+         "int4WeightExtractionFloat",
+         "int4WeightExtractionHalf",
+         "int8WeightExtractionFloat",
+         "int8WeightExtractionHalf",
+     ],
+ )
+
+
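+ # Pack pairs of signed int4 values into single int8 bytes, halving weight
+ # storage; the packing itself runs on the GPU through the compiled kernels
+ # loaded above.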
+ def compress_int4_weight(weight: torch.Tensor):  # (n, m)
+     with torch.cuda.device(weight.device):
+         n, m = weight.size(0), weight.size(1)
+         assert m % 2 == 0
+         m = m // 2
+         out = torch.empty(n, m, dtype=torch.int8, device="cuda")
+         stream = torch.cuda.current_stream()
+
+         gridDim = (n, 1, 1)
+         blockDim = (min(round_up(m, 32), 1024), 1, 1)
+
+         kernels.int4WeightCompression(
+             gridDim,
+             blockDim,
+             0,
+             stream,
+             [ctypes.c_void_p(weight.data_ptr()), ctypes.c_void_p(out.data_ptr()), ctypes.c_int32(n), ctypes.c_int32(m)],
+         )
+         return out
+
+
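+ # Dequantize a quantized weight matrix back to fp16 on the GPU: row i is
+ # recovered as weight[i] * scale_list[i], and int4 storage expands to twice
+ # as many columns as the packed int8 buffer holds.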
+ def extract_weight_to_half(weight: torch.Tensor, scale_list: torch.Tensor, source_bit_width: int):
+     if source_bit_width == 8:
+         func = kernels.int8WeightExtractionHalf
+     elif source_bit_width == 4:
+         func = kernels.int4WeightExtractionHalf
+     else:
+         assert False, "Unsupported bit-width"
+
+     with torch.cuda.device(weight.device):
+         n, m = weight.size(0), weight.size(1)
+         out = torch.empty(n, m * (8 // source_bit_width), dtype=torch.half, device="cuda")
+         stream = torch.cuda.current_stream()
+
+         gridDim = (n, 1, 1)
+         blockDim = (min(round_up(m, 32), 1024), 1, 1)
+
+         func(
+             gridDim,
+             blockDim,
+             0,
+             stream,
+             [
+                 ctypes.c_void_p(weight.data_ptr()),
+                 ctypes.c_void_p(scale_list.data_ptr()),
+                 ctypes.c_void_p(out.data_ptr()),
+                 ctypes.c_int32(n),
+                 ctypes.c_int32(m),
+             ],
+         )
+         return out
+
+
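+ # Drop-in replacement for an fp16 Linear layer. Weights are quantized per
+ # output row with a symmetric absmax scale, scale_i = max|W_i| / (2**(bits-1) - 1),
+ # then W_q = round(W / scale); forward() dequantizes on the fly via W8A16Linear.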
+ class QuantizedLinear(Linear):
+     def __init__(self, weight_bit_width: int, weight_tensor=None, bias_tensor=None, *args, **kwargs):
+         super(QuantizedLinear, self).__init__(*args, **kwargs)
+         self.weight_bit_width = weight_bit_width
+
+         shape = self.weight.shape
+         del self.weight
+
+         if weight_tensor is None:
+             self.weight = torch.empty(
+                 shape[0], shape[1] * weight_bit_width // 8, dtype=torch.int8, device=kwargs["device"]
+             )
+             self.weight_scale = torch.empty(shape[0], dtype=kwargs["params_dtype"], device=kwargs["device"])
+         else:
+             self.weight_scale = (weight_tensor.abs().max(dim=-1).values / ((2 ** (weight_bit_width - 1)) - 1)).half()
+             self.weight = torch.round(weight_tensor / self.weight_scale[:, None]).to(torch.int8)
+             if weight_bit_width == 4:
+                 self.weight = compress_int4_weight(self.weight)
+
+         self.weight = Parameter(self.weight.to(kwargs["device"]), requires_grad=False)
+         self.weight_scale = Parameter(self.weight_scale.to(kwargs["device"]), requires_grad=False)
+         if bias_tensor is not None:
+             self.bias = Parameter(bias_tensor.to(kwargs["device"]), requires_grad=False)
+         else:
+             self.bias = None
+
+     def forward(self, input):
+         output = W8A16Linear.apply(input, self.weight, self.weight_scale, self.weight_bit_width)
+         if self.bias is not None:
+             output = output + self.bias
+         return output
+
+
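+ # Swap the four fp16 projections in every transformer block (attention QKV,
+ # attention output, and the two MLP projections) for QuantizedLinear modules,
+ # quantizing the existing weights in place.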
+ def quantize(model, weight_bit_width):
+     """Replace fp16 linear layers with quantized linear layers."""
+
+     for layer in model.layers:
+         layer.attention.query_key_value = QuantizedLinear(
+             weight_bit_width=weight_bit_width,
+             weight_tensor=layer.attention.query_key_value.weight.to(torch.cuda.current_device()),
+             bias_tensor=layer.attention.query_key_value.bias,
+             in_features=layer.attention.query_key_value.in_features,
+             out_features=layer.attention.query_key_value.out_features,
+             bias=True,
+             dtype=torch.half,
+             device=layer.attention.query_key_value.weight.device,
+         )
+         layer.attention.dense = QuantizedLinear(
+             weight_bit_width=weight_bit_width,
+             weight_tensor=layer.attention.dense.weight.to(torch.cuda.current_device()),
+             bias_tensor=layer.attention.dense.bias,
+             in_features=layer.attention.dense.in_features,
+             out_features=layer.attention.dense.out_features,
+             bias=True,
+             dtype=torch.half,
+             device=layer.attention.dense.weight.device,
+         )
+         layer.mlp.dense_h_to_4h = QuantizedLinear(
+             weight_bit_width=weight_bit_width,
+             weight_tensor=layer.mlp.dense_h_to_4h.weight.to(torch.cuda.current_device()),
+             bias_tensor=layer.mlp.dense_h_to_4h.bias,
+             in_features=layer.mlp.dense_h_to_4h.in_features,
+             out_features=layer.mlp.dense_h_to_4h.out_features,
+             bias=True,
+             dtype=torch.half,
+             device=layer.mlp.dense_h_to_4h.weight.device,
+         )
+         layer.mlp.dense_4h_to_h = QuantizedLinear(
+             weight_bit_width=weight_bit_width,
+             weight_tensor=layer.mlp.dense_4h_to_h.weight.to(torch.cuda.current_device()),
+             bias_tensor=layer.mlp.dense_4h_to_h.bias,
+             in_features=layer.mlp.dense_4h_to_h.in_features,
+             out_features=layer.mlp.dense_4h_to_h.out_features,
+             bias=True,
+             dtype=torch.half,
+             device=layer.mlp.dense_4h_to_h.weight.device,
+         )
+     return model
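+ # Usage sketch (hypothetical, not part of this commit): with a ChatGLM-6B
+ # checkpoint loaded in fp16, something like
+ #     model.transformer = quantize(model.transformer, weight_bit_width=4)
+ # replaces the fp16 projections with 4-bit QuantizedLinear modules before
+ # inference; the exact attribute layout depends on the modeling code.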
tokenization_chatglm.py ADDED
@@ -0,0 +1,346 @@
+ """Tokenization classes for ChatGLM."""
+ import sys
+ import unicodedata
+ from typing import List, Optional, Union
+ from functools import lru_cache
+ import os
+ import collections
+ import re
+
+ from transformers.tokenization_utils import PreTrainedTokenizer
+ from icetk.text_tokenizer import TextTokenizer
+ from icetk.utils import auto_create
+ import icetk.sentencepiece_model_pb2 as sp_model
+ from transformers.utils import logging
+
+ logger = logging.get_logger(__name__)
+
+ PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
+     "THUDM/chatglm-6b": 2048,
+ }
+
+
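+ # Thin wrapper around the icetk SentencePiece tokenizer. icetk reserves ids
+ # [0, 20000) for image tokens, so every text-token id is shifted up by
+ # num_image_tokens; special tokens, explicit whitespace tokens, and byte
+ # fallback pieces are appended to the SentencePiece proto at load time.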
+ class SPTokenizer:
+     def __init__(
+         self,
+         vocab_file,
+         max_blank_length=80,
+         byte_fallback=True,
+     ):
+         assert vocab_file is not None
+         self.vocab_file = vocab_file
+         self.special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "<unused_0>", "<sop>", "<eop>", "<ENC>", "<dBLOCK>"]
+         self.max_blank_length = max_blank_length
+         self.byte_fallback = byte_fallback
+         self.text_tokenizer = self._build_text_tokenizer(encode_special_tokens=False)
+         self.special_text_tokenizer = self._build_text_tokenizer(encode_special_tokens=True)
+
+     @staticmethod
+     def _configure_tokenizer(
+         text_tokenizer: TextTokenizer,
+         special_tokens: List[str],
+         max_blank_length: int,
+         byte_fallback: bool,
+         encode_special_tokens=False,
+     ):
+         # special tokens
+         special_token_type = 4 if encode_special_tokens else 3  # 3 - CONTROL, 4 - USER_DEFINED
+         for token in special_tokens:
+             text_tokenizer.proto.pieces.append(
+                 sp_model.ModelProto.SentencePiece(piece=token, score=0.0, type=special_token_type)
+             )
+         # whitespaces
+         for token in [SPTokenizer.get_tab_token()] + [
+             SPTokenizer.get_blank_token(i) for i in range(2, max_blank_length + 1)
+         ]:
+             text_tokenizer.proto.pieces.append(sp_model.ModelProto.SentencePiece(piece=token, score=0.0, type=4))
+         # byte fallback
+         if byte_fallback:
+             text_tokenizer.proto.trainer_spec.byte_fallback = True
+             for i in range(256):
+                 text_tokenizer.proto.pieces.append(
+                     sp_model.ModelProto.SentencePiece(piece="<0x{:02X}>".format(i), score=0.0, type=6)
+                 )
+         text_tokenizer.refresh()
+
+     def _build_text_tokenizer(self, encode_special_tokens=False):
+         tokenizer = TextTokenizer(self.vocab_file)
+         self._configure_tokenizer(
+             tokenizer, self.special_tokens, self.max_blank_length, self.byte_fallback, encode_special_tokens
+         )
+         return tokenizer
+
+     def _get_text_tokenizer(self, encode_special_tokens=False):
+         if encode_special_tokens:
+             return self.special_text_tokenizer
+         else:
+             return self.text_tokenizer
+
+     @staticmethod
+     def get_blank_token(length: int):
+         assert length >= 2
+         return f"<|blank_{length}|>"
+
+     @staticmethod
+     def get_tab_token():
+         return "<|tab|>"
+
+     @property
+     def num_image_tokens(self):
+         return 20000
+
+     @property
+     def num_text_tokens(self):
+         return self.text_tokenizer.num_tokens
+
+     @property
+     def num_tokens(self):
+         return self.num_image_tokens + self.num_text_tokens
+
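+     # Whitespace is made explicit before SentencePiece sees the text: "\t"
+     # becomes <|tab|>, and a run of k spaces (2 <= k <= max_len) becomes
+     # <|blank_k|>; runs are replaced longest-first, so four spaces map to
+     # <|blank_4|> rather than two <|blank_2|> tokens.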
+     @staticmethod
+     def _encode_whitespaces(text: str, max_len: int = 80):
+         text = text.replace("\t", SPTokenizer.get_tab_token())
+         for i in range(max_len, 1, -1):
+             text = text.replace(" " * i, SPTokenizer.get_blank_token(i))
+         return text
+
+     def _preprocess(self, text: str, linebreak=True, whitespaces=True):
+         if linebreak:
+             text = text.replace("\n", "<n>")
+         if whitespaces:
+             text = self._encode_whitespaces(text, max_len=self.max_blank_length)
+         return text
+
+     def encode(
+         self, text: str, linebreak=True, whitespaces=True, special_tokens=False, add_dummy_prefix=True
+     ) -> List[int]:
+         """
+         @param text: Text to encode.
+         @param linebreak: Whether to encode newline (\n) in text.
+         @param whitespaces: Whether to encode multiple whitespaces or tabs in text, useful for source-code encoding.
+         @param special_tokens: Whether to encode special tokens ([MASK], [gMASK], etc.) in text.
+         @param add_dummy_prefix: Whether to add a dummy blank space at the beginning.
+         """
+         text = self._preprocess(text, linebreak, whitespaces)
+         if not add_dummy_prefix:
+             text = "<n>" + text
+         tmp = self._get_text_tokenizer(encode_special_tokens=special_tokens).encode(text)
+         tokens = [x + self.num_image_tokens for x in tmp]
+         return tokens if add_dummy_prefix else tokens[2:]
+
+     def decode(self, text_ids: List[int], special_tokens=False) -> str:
+         ids = [int(_id) - self.num_image_tokens for _id in text_ids]
+         ids = [_id for _id in ids if _id >= 0]
+         text = self._get_text_tokenizer(encode_special_tokens=special_tokens).decode(ids)
+         text = text.replace("<n>", "\n")
+         text = text.replace(SPTokenizer.get_tab_token(), "\t")
+         for i in range(2, self.max_blank_length + 1):
+             text = text.replace(self.get_blank_token(i), " " * i)
+         return text
+
+     def tokenize(
+         self, text: str, linebreak=True, whitespaces=True, special_tokens=False, add_dummy_prefix=True
+     ) -> List[str]:
+         """
+         @param text: Text to tokenize.
+         @param linebreak: Whether to encode newline (\n) in text.
+         @param whitespaces: Whether to encode multiple whitespaces or tabs in text, useful for source-code encoding.
+         @param special_tokens: Whether to encode special tokens ([MASK], [gMASK], etc.) in text.
+         @param add_dummy_prefix: Whether to add a dummy blank space at the beginning.
+         """
+         text = self._preprocess(text, linebreak, whitespaces)
+         if not add_dummy_prefix:
+             text = "<n>" + text
+         tokens = self._get_text_tokenizer(encode_special_tokens=special_tokens).tokenize(text)
+         return tokens if add_dummy_prefix else tokens[2:]
+
+     def __getitem__(self, x: Union[int, str]):
+         if isinstance(x, int):
+             if x < self.num_image_tokens:
+                 return "<image_{}>".format(x)
+             else:
+                 return self.text_tokenizer.convert_id_to_token(x - self.num_image_tokens)
+         elif isinstance(x, str):
+             if x.startswith("<image_") and x.endswith(">") and x[7:-1].isdigit():
+                 return int(x[7:-1])
+             else:
+                 return self.text_tokenizer.convert_token_to_id(x) + self.num_image_tokens
+         else:
+             raise ValueError("The key should be str or int.")
+
+
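+ # Hugging Face-facing wrapper that exposes SPTokenizer through the standard
+ # PreTrainedTokenizer interface: left-side padding, "ice_text.model" as the
+ # vocab file, and ChatGLM's [gMASK]/<sop> input format.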
+ class ChatGLMTokenizer(PreTrainedTokenizer):
+     """
+     Construct a ChatGLM tokenizer. Based on byte-level Byte-Pair-Encoding.
+
+     Args:
+         vocab_file (`str`):
+             Path to the vocabulary file.
+     """
+
+     vocab_files_names = {"vocab_file": "ice_text.model"}
+     max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
+     model_input_names = ["input_ids"]
+
+     def __init__(
+         self,
+         vocab_file,
+         do_lower_case=False,
+         remove_space=False,
+         bos_token='sop',
+         eos_token='eos',
+         eop_token='eop',
+         mask_token='[MASK]',
+         gmask_token='[gMASK]',
+         padding_side="left",
+         **kwargs
+     ) -> None:
+         super().__init__(
+             do_lower_case=do_lower_case,
+             remove_space=remove_space,
+             padding_side=padding_side,
+             **kwargs
+         )
+
+         self.do_lower_case = do_lower_case
+         self.remove_space = remove_space
+         self.vocab_file = vocab_file
+
+         self.bos_token = bos_token
+         self.eos_token = eos_token
+         self.eop_token = eop_token
+         self.mask_token = mask_token
+         self.gMASK_token = gmask_token
+
+         self.sp_tokenizer = SPTokenizer(vocab_file)
+
+     @property
+     def eop_token_id(self) -> Optional[int]:
+         """
+         `Optional[int]`: Id of the end-of-paragraph token in the vocabulary. Returns `None` if the token has not
+         been set.
+         """
+         if self.eop_token is None:
+             return None
+         return self.convert_tokens_to_ids(self.eop_token)
+
+     @property
+     def vocab_size(self):
+         """Returns vocab size."""
+         return self.sp_tokenizer.num_tokens
+
+     def get_vocab(self):
+         """Returns vocab as a dict."""
+         vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)}
+         vocab.update(self.added_tokens_encoder)
+         return vocab
+
+     def preprocess_text(self, inputs):
+         if self.remove_space:
+             outputs = " ".join(inputs.strip().split())
+         else:
+             outputs = inputs
+
+         if self.do_lower_case:
+             outputs = outputs.lower()
+
+         return outputs
+
+     def _tokenize(self, text, **kwargs):
+         """Returns a tokenized string."""
+         text = self.preprocess_text(text)
+         return self.sp_tokenizer.tokenize(text)
+
+     def decode(
+         self,
+         token_ids: Union[List[int], List[List[int]]],
+         skip_special_tokens: bool = False,
+         clean_up_tokenization_spaces: bool = True,
+         spaces_between_special_tokens: bool = True,
+         **kwargs
+     ) -> str:
+         if isinstance(token_ids[0], list):
+             tokens = []
+             for single_token_ids in token_ids:
+                 if self.pad_token_id in single_token_ids:  # remove pad
+                     single_token_ids = list(filter((self.pad_token_id).__ne__, single_token_ids))
+                 tokens.append(self.sp_tokenizer.decode(single_token_ids))
+             return tokens
+         else:
+             if self.pad_token_id in token_ids:  # remove pad
+                 token_ids = list(filter((self.pad_token_id).__ne__, token_ids))
+             return self.sp_tokenizer.decode(token_ids)
+
+     def _convert_token_to_id(self, token):
+         """Converts a token (str) to an id using the vocab."""
+         return self.sp_tokenizer[token]
+
+     def _convert_id_to_token(self, index):
+         """Converts an index (integer) to a token (str) using the vocab."""
+         return self.sp_tokenizer[index]
+
+     def save_vocabulary(self, save_directory, filename_prefix=None):
+         """
+         Save the vocabulary and special tokens file to a directory.
+
+         Args:
+             save_directory (`str`):
+                 The directory in which to save the vocabulary.
+             filename_prefix (`str`, *optional*):
+                 An optional prefix to add to the names of the saved files.
+
+         Returns:
+             `Tuple(str)`: Paths to the files saved.
+         """
+         if os.path.isdir(save_directory):
+             vocab_file = os.path.join(
+                 save_directory, self.vocab_files_names["vocab_file"]
+             )
+         else:
+             vocab_file = save_directory
+
+         with open(self.vocab_file, 'rb') as fin:
+             proto_str = fin.read()
+
+         with open(vocab_file, "wb") as writer:
+             writer.write(proto_str)
+
+         return (vocab_file,)
+
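+     # ChatGLM input format: if the (possibly concatenated) sequence contains
+     # no mask token, [gMASK] is appended; if it does not already end at a
+     # mask position, an eos token is appended; <sop> (bos) is always added
+     # last to mark where generation starts.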
+     def build_inputs_with_special_tokens(
+         self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
+     ) -> List[int]:
+         """
+         Build model inputs from a sequence or a pair of sequences by concatenating them and adding the special
+         tokens ChatGLM expects:
+
+         - sequence without a mask token: `X [gMASK] <sop>`
+         - sequence ending in [MASK]/[gMASK]: `X <sop>`
+         - sequence containing a mask token elsewhere: `X eos <sop>`
+
+         Args:
+             token_ids_0 (`List[int]`):
+                 List of IDs to which the special tokens will be added.
+             token_ids_1 (`List[int]`, *optional*):
+                 Optional second list of IDs for sequence pairs.
+
+         Returns:
+             `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
+         """
+         if token_ids_1 is not None:
+             token_ids_0 += token_ids_1
+         mask_ids = self.sp_tokenizer[self.mask_token]
+         gmask_ids = self.sp_tokenizer[self.gMASK_token]
+         if mask_ids not in token_ids_0 and gmask_ids not in token_ids_0:
+             token_ids_0 += [gmask_ids]
+
+         if token_ids_0[-1] != mask_ids and token_ids_0[-1] != gmask_ids:
+             token_ids_0 += [self.sp_tokenizer[self.eos_token]]
+
+         token_ids_0 += [self.sp_tokenizer[self.bos_token]]
+
+         return token_ids_0
tokenizer_config.json ADDED
@@ -0,0 +1,19 @@
+ {
+   "name_or_path": "THUDM/chatglm-6b",
+   "bos_token": "<sop>",
+   "eop_token": "<eop>",
+   "eos_token": "</s>",
+   "gmask_token": "[gMASK]",
+   "mask_token": "[MASK]",
+   "pad_token": "<pad>",
+   "unk_token": "<unk>",
+   "remove_space": false,
+   "do_lower_case": false,
+   "tokenizer_class": "ChatGLMTokenizer",
+   "auto_map": {
+     "AutoTokenizer": [
+       "tokenization_chatglm.ChatGLMTokenizer",
+       null
+     ]
+   }
+ }
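With the `auto_map` entry above, the tokenizer loads through the `transformers` auto classes once these files are in the repository. A minimal sketch, assuming a `transformers` version with remote-code support:

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
input_ids = tokenizer("Hello, ChatGLM")["input_ids"]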