system HF staff committed on
Commit
94fb979
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
1
+ *.7z filter=lfs diff=lfs merge=lfs -text
2
+ *.arrow filter=lfs diff=lfs merge=lfs -text
3
+ *.bin filter=lfs diff=lfs merge=lfs -text
4
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
5
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
6
+ *.ftz filter=lfs diff=lfs merge=lfs -text
7
+ *.gz filter=lfs diff=lfs merge=lfs -text
8
+ *.h5 filter=lfs diff=lfs merge=lfs -text
9
+ *.joblib filter=lfs diff=lfs merge=lfs -text
10
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
11
+ *.model filter=lfs diff=lfs merge=lfs -text
12
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
13
+ *.onnx filter=lfs diff=lfs merge=lfs -text
14
+ *.ot filter=lfs diff=lfs merge=lfs -text
15
+ *.parquet filter=lfs diff=lfs merge=lfs -text
16
+ *.pb filter=lfs diff=lfs merge=lfs -text
17
+ *.pt filter=lfs diff=lfs merge=lfs -text
18
+ *.pth filter=lfs diff=lfs merge=lfs -text
19
+ *.rar filter=lfs diff=lfs merge=lfs -text
20
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
21
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
22
+ *.tflite filter=lfs diff=lfs merge=lfs -text
23
+ *.tgz filter=lfs diff=lfs merge=lfs -text
24
+ *.xz filter=lfs diff=lfs merge=lfs -text
25
+ *.zip filter=lfs diff=lfs merge=lfs -text
26
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
27
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,609 @@
1
+ ---
2
+ annotations_creators:
3
+ - expert-generated
4
+ language_creators:
5
+ - expert-generated
6
+ languages:
7
+ - id
8
+ licenses:
9
+ - mit
10
+ multilinguality:
11
+ - monolingual
12
+ size_categories:
13
+ bapos:
14
+ - 10K<n<100K
15
+ casa:
16
+ - 1K<n<10K
17
+ emot:
18
+ - 1K<n<10K
19
+ facqa:
20
+ - 1K<n<10K
21
+ hoasa:
22
+ - n<1K
23
+ keps:
24
+ - 1K<n<10K
25
+ nergrit:
26
+ - 1K<n<10K
27
+ nerp:
28
+ - 1K<n<10K
29
+ posp:
30
+ - 1K<n<10K
31
+ smsa:
32
+ - 10K<n<100K
33
+ terma:
34
+ - 1K<n<10K
35
+ wrete:
36
+ - n<1K
37
+ source_datasets:
38
+ - original
39
+ task_categories:
40
+ bapos:
41
+ - structure-prediction
42
+ casa:
43
+ - text-classification
44
+ emot:
45
+ - text-classification
46
+ facqa:
47
+ - question-answering
48
+ hoasa:
49
+ - text-classification
50
+ keps:
51
+ - structure-prediction
52
+ nergrit:
53
+ - structure-prediction
54
+ nerp:
55
+ - structure-prediction
56
+ posp:
57
+ - structure-prediction
58
+ smsa:
59
+ - text-classification
60
+ terma:
61
+ - structure-prediction
62
+ wrete:
63
+ - text-classification
64
+ task_ids:
65
+ bapos:
66
+ - structure-prediction-other-part-of-speech-tagging
67
+ casa:
68
+ - text-classification-other-aspect-based-sentiment-analysis
69
+ emot:
70
+ - multi-class-classification
71
+ facqa:
72
+ - closed-domain-qa
73
+ hoasa:
74
+ - text-classification-other-aspect-based-sentiment-analysis
75
+ keps:
76
+ - structure-prediction-other-keyphrase-extraction
77
+ nergrit:
78
+ - named-entity-recognition
79
+ nerp:
80
+ - named-entity-recognition
81
+ posp:
82
+ - structure-prediction-other-part-of-speech-tagging
83
+ smsa:
84
+ - sentiment-classification
85
+ terma:
86
+ - structure-prediction-other-span-extraction
87
+ wrete:
88
+ - semantic-similarity-classification
89
+ ---
90
+
91
+ # Dataset Card for IndoNLU
92
+
93
+ ## Table of Contents
94
+ - [Dataset Description](#dataset-description)
95
+ - [Dataset Summary](#dataset-summary)
96
+ - [Supported Tasks](#supported-tasks-and-leaderboards)
97
+ - [Languages](#languages)
98
+ - [Dataset Structure](#dataset-structure)
99
+ - [Data Instances](#data-instances)
100
+ - [Data Fields](#data-fields)
101
+ - [Data Splits](#data-splits)
102
+ - [Dataset Creation](#dataset-creation)
103
+ - [Curation Rationale](#curation-rationale)
104
+ - [Source Data](#source-data)
105
+ - [Annotations](#annotations)
106
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
107
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
108
+ - [Social Impact of Dataset](#social-impact-of-dataset)
109
+ - [Discussion of Biases](#discussion-of-biases)
110
+ - [Other Known Limitations](#other-known-limitations)
111
+ - [Additional Information](#additional-information)
112
+ - [Dataset Curators](#dataset-curators)
113
+ - [Licensing Information](#licensing-information)
114
+ - [Citation Information](#citation-information)
115
+
116
+ ## Dataset Description
117
+
118
+ - **Homepage:** [IndoNLU Website](https://www.indobenchmark.com/)
119
+ - **Repository:** [IndoNLU GitHub](https://github.com/indobenchmark/indonlu)
120
+ - **Paper:** [IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding](https://www.aclweb.org/anthology/2020.aacl-main.85.pdf)
121
+ - **Leaderboard:** [Needs More Information]
122
+ - **Point of Contact:** [Needs More Information]
123
+
124
+ ### Dataset Summary
125
+
126
+ The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for Bahasa Indonesia (Indonesian language).
127
+ There are 12 datasets in the IndoNLU benchmark for Indonesian natural language understanding; each is available as a separate configuration (see the loading sketch after the list).
128
+ 1. `EmoT`: An emotion classification dataset collected from the social media platform Twitter. The dataset consists of around 4000 Indonesian colloquial language tweets, covering five different emotion labels: anger, fear, happy, love, and sadness.
129
+ 2. `SmSA`: This sentence-level sentiment analysis dataset is a collection of comments and reviews in Indonesian obtained from multiple online platforms. The text was crawled and then annotated by several Indonesian linguists to construct this dataset. There are three possible sentiments in the `SmSA` dataset: positive, negative, and neutral.
130
+ 3. `CASA`: An aspect-based sentiment analysis dataset consisting of around a thousand car reviews collected from multiple Indonesian online automobile platforms. The dataset covers six aspects of car quality. We define the task to be a multi-label classification task, where each label represents a sentiment for a single aspect with three possible values: positive, negative, and neutral.
131
+ 4. `HoASA`: An aspect-based sentiment analysis dataset consisting of hotel reviews collected from the hotel aggregator platform, [AiryRooms](https://github.com/annisanurulazhar/absa-playground). The dataset covers ten different aspects of hotel quality. Similar to the `CASA` dataset, each review is labeled with a single sentiment label for each aspect. There are four possible sentiment classes for each sentiment label: positive, negative, neutral, and positive-negative. The positive-negative label is given to a review that contains multiple sentiments of the same aspect but for different objects (e.g., cleanliness of bed and toilet).
132
+ 5. `WReTE`: The Wiki Revision Edits Textual Entailment dataset consists of 450 sentence pairs constructed from Wikipedia revision history. The dataset contains pairs of sentences and binary semantic relations between the pairs. The data are labeled as entailed when the meaning of the second sentence can be derived from the first one, and not entailed otherwise.
133
+ 6. `POSP`: This Indonesian part-of-speech tagging (POS) dataset is collected from Indonesian news websites. The dataset consists of around 8000 sentences with 26 POS tags. The POS tag labels follow the [Indonesian Association of Computational Linguistics (INACL) POS Tagging Convention](http://inacl.id/inacl/wp-content/uploads/2017/06/INACL-POS-Tagging-Convention-26-Mei.pdf).
134
+ 7. `BaPOS`: This POS tagging dataset contains about 10,000 sentences, collected from the [PAN Localization Project](http://www.panl10n.net/). In this dataset, each word is tagged by one of [23 POS tag classes](https://bahasa.cs.ui.ac.id/postag/downloads/Tagset.pdf). Data splitting used in this benchmark follows the experimental setting used by [Kurniawan and Aji (2018)](https://arxiv.org/abs/1809.03391).
135
+ 8. `TermA`: This span-extraction dataset is collected from the hotel aggregator platform, [AiryRooms](https://github.com/jordhy97/final_project). The dataset consists of thousands of hotel reviews, which each contain a span label for aspect and sentiment words representing the opinion of the reviewer on the corresponding aspect. The labels use Inside-Outside-Beginning (IOB) tagging representation with two kinds of tags, aspect and sentiment.
136
+ 9. `KEPS`: This keyphrase extraction dataset consists of text from Twitter discussing banking products and services and is written in the Indonesian language. A phrase containing important information is considered a keyphrase. Text may contain one or more keyphrases since important phrases can be located at different positions. The dataset follows the IOB chunking format, which represents the position of the keyphrase.
137
+ 10. `NERGrit`: This NER dataset is taken from the [Grit-ID repository](https://github.com/grit-id/nergrit-corpus), and the labels are spans in IOB chunking representation. The dataset consists of three kinds of named entity tags, PERSON (name of person), PLACE (name of location), and ORGANIZATION (name of organization).
138
+ 11. `NERP`: This NER dataset (Hoesen and Purwarianti, 2018) contains texts collected from several Indonesian news websites. There are five labels available in this dataset, PER (name of person), LOC (name of location), IND (name of product or brand), EVT (name of the event), and FNB (name of food and beverage). Similar to the `TermA` dataset, the `NERP` dataset uses the IOB chunking format.
139
+ 12. `FacQA`: The goal of the FacQA dataset is to find the answer to a question from a provided short passage from a news article. Each row in the FacQA dataset consists of a question, a short passage, and a label phrase, which can be found inside the corresponding short passage. There are six categories of questions: date, location, name, organization, person, and quantitative.
140
+
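+ Each of the 12 datasets above is available as a separate configuration of the `indonlu` dataset in the Hugging Face `datasets` library. A minimal loading sketch (assuming `datasets` version 1.2.0 or later is installed; `emot` is used here as the example configuration):
+
+ ```
+ from datasets import load_dataset
+
+ # Load one IndoNLU configuration by name, e.g. the EmoT emotion classification task.
+ emot = load_dataset("indonlu", "emot")
+
+ # The result is a DatasetDict with train/validation/test splits.
+ print(emot)
+ print(emot["train"][0])  # a single example with 'tweet' and 'label' fields
+ ```
+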
141
+ ### Supported Tasks and Leaderboards
142
+
143
+ [Needs More Information]
144
+
145
+ ### Languages
146
+
147
+ Indonesian
148
+
149
+ ## Dataset Structure
150
+
151
+ ### Data Instances
152
+
153
+ 1. `EmoT` dataset
154
+
155
+ A data point consists of `tweet` and `label` (an integer class index; see the decoding sketch after these examples). An example from the train set looks as follows:
156
+ ```
157
+ {
158
+ 'tweet': 'Ini adalah hal yang paling membahagiakan saat biasku foto bersama ELF #ReturnOfTheLittlePrince #HappyHeeChulDay',
159
+ 'label': 4,
160
+ }
161
+ ```
162
+
163
+ 2. `SmSA` dataset
164
+
165
+ A data point consists of `text` and `label`. An example from the train set looks as follows:
166
+ ```
167
+ {
168
+ 'text': 'warung ini dimiliki oleh pengusaha pabrik tahu yang sudah puluhan tahun terkenal membuat tahu putih di bandung . tahu berkualitas , dipadu keahlian memasak , dipadu kretivitas , jadilah warung yang menyajikan menu utama berbahan tahu , ditambah menu umum lain seperti ayam . semuanya selera indonesia . harga cukup terjangkau . jangan lewatkan tahu bletoka nya , tidak kalah dengan yang asli dari tegal !',
169
+ 'label': 0,
170
+ }
171
+ ```
172
+
173
+ 3. `CASA` dataset
174
+
175
+ A data point consists of `sentence` and multi-label `fuel`, `machine`, `others`, `part`, `price`, and `service`. An example from the train set looks as follows:
176
+ ```
177
+ {
178
+ 'sentence': 'Saya memakai Honda Jazz GK5 tahun 2014 ( pertama meluncur ) . Mobil nya bagus dan enak sesuai moto nya menyenangkan untuk dikendarai',
179
+ 'fuel': 1,
180
+ 'machine': 1,
181
+ 'others': 2,
182
+ 'part': 1,
183
+ 'price': 1,
184
+ 'service': 1
185
+ }
186
+ ```
187
+
188
+ 4. `HoASA` dataset
189
+
190
+ A data point consists of `sentence` and multi-label `ac`, `air_panas`, `bau`, `general`, `kebersihan`, `linen`, `service`, `sunrise_meal`, `tv`, and `wifi`. An example from the train set looks as follows:
191
+ ```
192
+ {
193
+ 'sentence': 'kebersihan kurang...',
194
+ 'ac': 1,
195
+ 'air_panas': 1,
196
+ 'bau': 1,
197
+ 'general': 1,
198
+ 'kebersihan': 0,
199
+ 'linen': 1,
200
+ 'service': 1,
201
+ 'sunrise_meal': 1,
202
+ 'tv': 1,
203
+ 'wifi': 1
204
+ }
205
+ ```
206
+
207
+ 5. `WReTE` dataset
208
+
209
+ A data point consists of `premise`, `hypothesis`, `category`, and `label`. An example from the train set looks as follows:
210
+ ```
211
+ {
212
+ 'premise': 'Pada awalnya bangsa Israel hanya terdiri dari satu kelompok keluarga di antara banyak kelompok keluarga yang hidup di tanah Kanan pada abad 18 SM .',
213
+ 'hypothesis': 'Pada awalnya bangsa Yahudi hanya terdiri dari satu kelompok keluarga di antara banyak kelompok keluarga yang hidup di tanah Kanan pada abad 18 SM .',
214
+ 'category': 'menolak perubahan teks terakhir oleh istimewa kontribusi pengguna 141 109 98 87 141 109 98 87 dan mengembalikan revisi 6958053 oleh johnthorne',
215
+ 'label': 0,
216
+ }
217
+ ```
218
+
219
+ 6. `POSP` dataset
220
+
221
+ A data point consists of `tokens` and `pos_tags`. An example from the train set looks as follows:
222
+ ```
223
+ {
224
+ 'tokens': ['kepala', 'dinas', 'tata', 'kota', 'manado', 'amos', 'kenda', 'menyatakan', 'tidak', 'tahu', '-', 'menahu', 'soal', 'pencabutan', 'baliho', '.', 'ia', 'enggan', 'berkomentar', 'banyak', 'karena', 'merasa', 'bukan', 'kewenangannya', '.'],
225
+ 'pos_tags': [11, 6, 11, 11, 7, 7, 7, 9, 23, 4, 21, 9, 11, 11, 11, 21, 3, 2, 4, 1, 19, 9, 23, 11, 21]
226
+ }
227
+ ```
228
+
229
+ 7. `BaPOS` dataset
230
+
231
+ A data point consists of `tokens` and `pos_tags`. An example from the train set looks as follows:
232
+ ```
233
+ {
234
+ 'tokens': ['Kera', 'untuk', 'amankan', 'pesta', 'olahraga'],
235
+ 'pos_tags': [27, 8, 26, 27, 30]
236
+ }
237
+ ```
238
+
239
+ 8. `TermA` dataset
240
+
241
+ A data point consists of `tokens` and `seq_label`. An example from the train set looks as follows:
242
+ ```
243
+ {
244
+ 'tokens': ['kamar', 'saya', 'ada', 'kendala', 'di', 'ac', 'tidak', 'berfungsi', 'optimal', '.', 'dan', 'juga', 'wifi', 'koneksi', 'kurang', 'stabil', '.'],
245
+ 'seq_label': [1, 1, 1, 1, 1, 4, 3, 0, 0, 1, 1, 1, 4, 2, 3, 0, 1]
246
+ }
247
+ ```
248
+
249
+ 9. `KEPS` dataset
250
+
251
+ A data point consists of `tokens` and `seq_label`. An example from the train set looks as follows:
252
+ ```
253
+ {
254
+ 'tokens': ['Setelah', 'melalui', 'proses', 'telepon', 'yang', 'panjang', 'tutup', 'sudah', 'kartu', 'kredit', 'bca', 'Ribet'],
255
+ 'seq_label': [0, 1, 1, 2, 0, 0, 1, 0, 1, 2, 2, 1]
256
+ }
257
+ ```
258
+
259
+ 10. `NERGrit` dataset
260
+
261
+ A data point consists of `tokens` and `ner_tags`. An example from the train set looks as follows:
262
+ ```
263
+ {
264
+ 'tokens': ['Kontribusinya', 'terhadap', 'industri', 'musik', 'telah', 'mengumpulkan', 'banyak', 'prestasi', 'termasuk', 'lima', 'Grammy', 'Awards', ',', 'serta', 'dua', 'belas', 'nominasi', ';', 'dua', 'Guinness', 'World', 'Records', ';', 'dan', 'penjualannya', 'diperkirakan', 'sekitar', '64', 'juta', 'rekaman', '.'],
265
+ 'ner_tags': [5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5]}
266
+ ```
267
+
268
+ 11. `NERP` dataset
269
+
270
+ A data point consists of `tokens` and `ner_tags`. An example from the train set looks as follows:
271
+ ```
272
+ {
273
+ 'tokens': ['kepala', 'dinas', 'tata', 'kota', 'manado', 'amos', 'kenda', 'menyatakan', 'tidak', 'tahu', '-', 'menahu', 'soal', 'pencabutan', 'baliho', '.', 'ia', 'enggan', 'berkomentar', 'banyak', 'karena', 'merasa', 'bukan', 'kewenangannya', '.'],
274
+ 'ner_tags': [9, 9, 9, 9, 2, 7, 0, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9]
275
+ }
276
+ ```
277
+
278
+ 12. `FacQA` dataset
279
+
280
+ A data point consists of `question`, `passage`, and `seq_label`. An example from the train set looks as follows:
281
+ ```
282
+ {
283
+ 'passage': ['Lewat', 'telepon', 'ke', 'kantor', 'berita', 'lokal', 'Current', 'News', 'Service', ',', 'Hezb-ul', 'Mujahedeen', ',', 'kelompok', 'militan', 'Kashmir', 'yang', 'terbesar', ',', 'menyatakan', 'bertanggung', 'jawab', 'atas', 'ledakan', 'di', 'Srinagar', '.'],
284
+ 'question': ['Kelompok', 'apakah', 'yang', 'menyatakan', 'bertanggung', 'jawab', 'atas', 'ledakan', 'di', 'Srinagar', '?'],
285
+ 'seq_label': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
286
+ }
287
+ ```
288
+
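+ In the examples above, the `label`, `pos_tags`, `ner_tags`, and `seq_label` values are integer indices of `ClassLabel` features. A small sketch of how they can be decoded back to string names (shown for the `EmoT` example; the same pattern applies to the other configurations):
+
+ ```
+ from datasets import load_dataset
+
+ emot = load_dataset("indonlu", "emot")
+
+ # ClassLabel features keep the label vocabulary, so integer ids can be decoded to names.
+ label_feature = emot["train"].features["label"]
+ print(label_feature.names)       # ['sadness', 'anger', 'love', 'fear', 'happy']
+ print(label_feature.int2str(4))  # 'happy', the label id of the EmoT example above
+ ```
+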
289
+ ### Data Fields
290
+
291
+ 1. `EmoT` dataset
292
+
293
+ - `tweet`: a `string` feature.
294
+ - `label`: an emotion label, with possible values including `sadness`, `anger`, `love`, `fear`, `happy`.
295
+
296
+ 2. `SmSA` dataset
297
+
298
+ - `text`: a `string` feature.
299
+ - `label`: a sentiment label, with possible values including `positive`, `neutral`, `negative`.
300
+
301
+ 3. `CASA` dataset
302
+
303
+ - `sentence`: a `string` feature.
304
+ - `fuel`: a sentiment label, with possible values including `negative`, `neutral`, `positive`.
305
+ - `machine`: a sentiment label, with possible values including `negative`, `neutral`, `positive`.
306
+ - `others`: a sentiment label, with possible values including `negative`, `neutral`, `positive`.
307
+ - `part`: a sentiment label, with possible values including `negative`, `neutral`, `positive`.
308
+ - `price`: a sentiment label, with possible values including `negative`, `neutral`, `positive`.
309
+ - `service`: a sentiment label, with possible values including `negative`, `neutral`, `positive`.
310
+
311
+ 4. `HoASA` dataset
312
+
313
+ - `sentence`: a `string` feature.
314
+ - `ac`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
315
+ - `air_panas`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
316
+ - `bau`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
317
+ - `general`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
318
+ - `kebersihan`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
319
+ - `linen`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
320
+ - `service`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
321
+ - `sunrise_meal`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
322
+ - `tv`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
323
+ - `wifi`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
324
+
325
+ 5. `WReTE` dataset
326
+
327
+ - `premise`: a `string` feature.
328
+ - `hypothesis`: a `string` feature.
329
+ - `category`: a `string` feature.
330
+ - `label`: a classification label, with possible values including `NotEntail`, `Entail_or_Paraphrase`.
331
+
332
+ 6. `POSP` dataset
333
+
334
+ - `tokens`: a `list` of `string` features.
335
+ - `pos_tags`: a `list` of POS tag labels, with possible values including `B-PPO`, `B-KUA`, `B-ADV`, `B-PRN`, `B-VBI`.
336
+
337
+ The POS tag labels follow the [Indonesian Association of Computational Linguistics (INACL) POS Tagging Convention](http://inacl.id/inacl/wp-content/uploads/2017/06/INACL-POS-Tagging-Convention-26-Mei.pdf). The full tag vocabulary can also be read from the loaded feature schema (see the sketch after this section).
338
+
339
+ 7. `BaPOS` dataset
340
+
341
+ - `tokens`: a `list` of `string` features.
342
+ - `pos_tags`: a `list` of POS tag labels, with possible values including `B-PR`, `B-CD`, `I-PR`, `B-SYM`, `B-JJ`.
343
+
344
+ The POS tag labels follow the [UI Tagset](https://bahasa.cs.ui.ac.id/postag/downloads/Tagset.pdf).
345
+
346
+ 8. `TermA` dataset
347
+
348
+ - `tokens`: a `list` of `string` features.
349
+ - `seq_label`: a `list` of classification labels, with possible values including `I-SENTIMENT`, `O`, `I-ASPECT`, `B-SENTIMENT`, `B-ASPECT`.
350
+
351
+ 9. `KEPS` dataset
352
+
353
+ - `tokens`: a `list` of `string` features.
354
+ - `seq_label`: a `list` of classification labels, with possible values including `O`, `B`, `I`.
355
+
356
+ The labels use Inside-Outside-Beginning (IOB) tagging.
357
+
358
+ 10. `NERGrit` dataset
359
+
360
+ - `tokens`: a `list` of `string` features.
361
+ - `ner_tags`: a `list` of NER tag labels, with possible values including `I-PERSON`, `B-ORGANISATION`, `I-ORGANISATION`, `B-PLACE`, `I-PLACE`.
362
+
363
+ The labels use Inside-Outside-Beginning (IOB) tagging.
364
+
365
+ 11. `NERP` dataset
366
+
367
+ - `tokens`: a `list` of `string` features.
368
+ - `ner_tags`: a `list` of NER tag labels, with possible values including `I-PPL`, `B-EVT`, `B-PLC`, `I-IND`, `B-IND`.
369
+
370
+ 12. `FacQA` dataset
371
+
372
+ - `question`: a `list` of `string` features.
373
+ - `passage`: a `list` of `string` features.
374
+ - `seq_label`: a `list` of classification labels, with possible values including `O`, `B`, `I`.
375
+
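+ For the sequence-labeling configurations, the `pos_tags`, `ner_tags`, and `seq_label` columns are sequences of `ClassLabel` ids, so the full tag vocabularies can be read from the feature schema. A sketch using the `posp` configuration as an example:
+
+ ```
+ from datasets import load_dataset
+
+ posp = load_dataset("indonlu", "posp")
+
+ # 'pos_tags' is a Sequence of ClassLabel, so the tag names live on the inner feature.
+ tag_names = posp["train"].features["pos_tags"].feature.names
+ print(len(tag_names))  # 26 POS tags
+ print(tag_names[:5])   # ['B-PPO', 'B-KUA', 'B-ADV', 'B-PRN', 'B-VBI']
+
+ # Decode the tag ids of the first training example back to tag strings.
+ example = posp["train"][0]
+ print([tag_names[i] for i in example["pos_tags"]])
+ ```
+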
376
+ ### Data Splits
377
+
378
+ The data is split into training, validation, and test sets; a sketch for checking the split sizes follows the table.
379
+
380
+ | | dataset | Train | Valid | Test |
381
+ |----|---------|-------|-------|------|
382
+ | 1 | EmoT | 3521 | 440 | 440 |
383
+ | 2 | SmSA | 11000 | 1260 | 500 |
384
+ | 3 | CASA | 810 | 90 | 180 |
385
+ | 4 | HoASA | 2283 | 285 | 286 |
386
+ | 5 | WReTE | 300 | 50 | 100 |
387
+ | 6 | POSP | 6720 | 840 | 840 |
388
+ | 7 | BaPOS | 8000 | 1000 | 1029 |
389
+ | 8 | TermA | 3000 | 1000 | 1000 |
390
+ | 9 | KEPS | 800 | 200 | 247 |
391
+ | 10 | NERGrit | 1672 | 209 | 209 |
392
+ | 11 | NERP | 6720 | 840 | 840 |
393
+ | 12 | FacQA | 2495 | 311 | 311 |
394
+
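+ The split sizes above can be verified directly on a loaded `DatasetDict`; a short sketch (again using `emot` as the example configuration):
+
+ ```
+ from datasets import load_dataset
+
+ emot = load_dataset("indonlu", "emot")
+
+ # Number of examples per split, matching the EmoT row of the table above.
+ for split_name, split in emot.items():
+     print(split_name, split.num_rows)  # train 3521, validation 440, test 440
+ ```
+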
395
+ ## Dataset Creation
396
+
397
+ ### Curation Rationale
398
+
399
+ [Needs More Information]
400
+
401
+ ### Source Data
402
+
403
+ #### Initial Data Collection and Normalization
404
+
405
+ [Needs More Information]
406
+
407
+ #### Who are the source language producers?
408
+
409
+ [Needs More Information]
410
+
411
+ ### Annotations
412
+
413
+ #### Annotation process
414
+
415
+ [Needs More Information]
416
+
417
+ #### Who are the annotators?
418
+
419
+ [Needs More Information]
420
+
421
+ ### Personal and Sensitive Information
422
+
423
+ [Needs More Information]
424
+
425
+ ## Considerations for Using the Data
426
+
427
+ ### Social Impact of Dataset
428
+
429
+ [Needs More Information]
430
+
431
+ ### Discussion of Biases
432
+
433
+ [Needs More Information]
434
+
435
+ ### Other Known Limitations
436
+
437
+ [Needs More Information]
438
+
439
+ ## Additional Information
440
+
441
+ ### Dataset Curators
442
+
443
+ [Needs More Information]
444
+
445
+ ### Licensing Information
446
+
447
+ The IndoNLU benchmark datasets are released under the MIT License.
448
+
449
+ ### Citation Information
450
+
451
+ IndoNLU citation
452
+ ```
453
+ @inproceedings{wilie2020indonlu,
454
+ title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
455
+ author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
456
+ booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
457
+ year={2020}
458
+ }
459
+ ```
460
+
461
+ `EmoT` dataset citation
462
+ ```
463
+ @inproceedings{saputri2018emotion,
464
+ title={Emotion Classification on Indonesian Twitter Dataset},
465
+ author={Mei Silviana Saputri and Rahmad Mahendra and Mirna Adriani},
466
+ booktitle={Proceedings of the 2018 International Conference on Asian Language Processing(IALP)},
467
+ pages={90--95},
468
+ year={2018},
469
+ organization={IEEE}
470
+ }
471
+ ```
472
+
473
+ `SmSA` dataset citation
474
+ ```
475
+ @inproceedings{purwarianti2019improving,
476
+ title={Improving Bi-LSTM Performance for Indonesian Sentiment Analysis Using Paragraph Vector},
477
+ author={Ayu Purwarianti and Ida Ayu Putu Ari Crisdayanti},
478
+ booktitle={Proceedings of the 2019 International Conference of Advanced Informatics: Concepts, Theory and Applications (ICAICTA)},
479
+ pages={1--5},
480
+ year={2019},
481
+ organization={IEEE}
482
+ }
483
+ ```
484
+
485
+ `CASA` dataset citation
486
+ ```
487
+ @inproceedings{ilmania2018aspect,
488
+ title={Aspect Detection and Sentiment Classification Using Deep Neural Network for Indonesian Aspect-based Sentiment Analysis},
489
+ author={Arfinda Ilmania and Abdurrahman and Samuel Cahyawijaya and Ayu Purwarianti},
490
+ booktitle={Proceedings of the 2018 International Conference on Asian Language Processing(IALP)},
491
+ pages={62--67},
492
+ year={2018},
493
+ organization={IEEE}
494
+ }
495
+ ```
496
+
497
+ `HoASA` dataset citation
498
+ ```
499
+ @inproceedings{azhar2019multi,
500
+ title={Multi-label Aspect Categorization with Convolutional Neural Networks and Extreme Gradient Boosting},
501
+ author={A. N. Azhar and M. L. Khodra and A. P. Sutiono},
502
+ booktitle={Proceedings of the 2019 International Conference on Electrical Engineering and Informatics (ICEEI)},
503
+ pages={35--40},
504
+ year={2019}
505
+ }
506
+ ```
507
+
508
+ `WReTE` dataset citation
509
+ ```
510
+ @inproceedings{setya2018semi,
511
+ title={Semi-supervised Textual Entailment on Indonesian Wikipedia Data},
512
+ author={Ken Nabila Setya and Rahmad Mahendra},
513
+ booktitle={Proceedings of the 2018 International Conference on Computational Linguistics and Intelligent Text Processing (CICLing)},
514
+ year={2018}
515
+ }
516
+ ```
517
+
518
+ `POSP` dataset citation
519
+ ```
520
+ @inproceedings{hoesen2018investigating,
521
+ title={Investigating Bi-LSTM and CRF with POS Tag Embedding for Indonesian Named Entity Tagger},
522
+ author={Devin Hoesen and Ayu Purwarianti},
523
+ booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)},
524
+ pages={35--38},
525
+ year={2018},
526
+ organization={IEEE}
527
+ }
528
+ ```
529
+
530
+ `BaPOS` dataset citation
531
+ ```
532
+ @inproceedings{dinakaramani2014designing,
533
+ title={Designing an Indonesian Part of Speech Tagset and Manually Tagged Indonesian Corpus},
534
+ author={Arawinda Dinakaramani and Fam Rashel and Andry Luthfi and Ruli Manurung},
535
+ booktitle={Proceedings of the 2014 International Conference on Asian Language Processing (IALP)},
536
+ pages={66--69},
537
+ year={2014},
538
+ organization={IEEE}
539
+ }
540
+ @inproceedings{kurniawan2018toward,
541
+ title={Toward a Standardized and More Accurate Indonesian Part-of-Speech Tagging},
542
+ author={Kemal Kurniawan and Alham Fikri Aji},
543
+ booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)},
544
+ pages={303--307},
545
+ year={2018},
546
+ organization={IEEE}
547
+ }
548
+ ```
549
+
550
+ `TermA` dataset citation
551
+ ```
552
+ @article{winatmoko2019aspect,
553
+ title={Aspect and Opinion Term Extraction for Hotel Reviews Using Transfer Learning and Auxiliary Labels},
554
+ author={Yosef Ardhito Winatmoko and Ali Akbar Septiandri and Arie Pratama Sutiono},
555
+ journal={arXiv preprint arXiv:1909.11879},
556
+ year={2019}
557
+ }
558
+ @article{fernando2019aspect,
559
+ title={Aspect and Opinion Terms Extraction Using Double Embeddings and Attention Mechanism for Indonesian Hotel Reviews},
560
+ author={Jordhy Fernando and Masayu Leylia Khodra and Ali Akbar Septiandri},
561
+ journal={arXiv preprint arXiv:1908.04899},
562
+ year={2019}
563
+ }
564
+ ```
565
+
566
+ `KEPS` dataset citation
567
+ ```
568
+ @inproceedings{mahfuzh2019improving,
569
+ title={Improving Joint Layer RNN based Keyphrase Extraction by Using Syntactical Features},
570
+ author={Miftahul Mahfuzh and Sidik Soleman and Ayu Purwarianti},
571
+ booktitle={Proceedings of the 2019 International Conference of Advanced Informatics: Concepts, Theory and Applications (ICAICTA)},
572
+ pages={1--6},
573
+ year={2019},
574
+ organization={IEEE}
575
+ }
576
+ ```
577
+
578
+ `NERGrit` dataset citation
579
+ ```
580
+ @online{nergrit2019,
581
+ title={NERGrit Corpus},
582
+ author={NERGrit Developers},
583
+ year={2019},
584
+ url={https://github.com/grit-id/nergrit-corpus}
585
+ }
586
+ ```
587
+
588
+ `NERP` dataset citation
589
+ ```
590
+ @inproceedings{hoesen2018investigating,
591
+ title={Investigating Bi-LSTM and CRF with POS Tag Embedding for Indonesian Named Entity Tagger},
592
+ author={Devin Hoesen and Ayu Purwarianti},
593
+ booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)},
594
+ pages={35--38},
595
+ year={2018},
596
+ organization={IEEE}
597
+ }
598
+ ```
599
+
600
+ `FacQA` dataset citation
601
+ ```
602
+ @inproceedings{purwarianti2007machine,
603
+ title={A Machine Learning Approach for Indonesian Question Answering System},
604
+ author={Ayu Purwarianti and Masatoshi Tsuchiya and Seiichi Nakagawa},
605
+ booktitle={Proceedings of Artificial Intelligence and Applications },
606
+ pages={573--578},
607
+ year={2007}
608
+ }
609
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
1
+ {"emot": {"description": "An emotion classification dataset collected from the social media\nplatform Twitter (Saputri et al., 2018). The dataset consists of\naround 4000 Indonesian colloquial language tweets, covering five\ndifferent emotion labels: sadness, anger, love, fear, and happy.", "citation": "@inproceedings{saputri2018emotion,\n title={Emotion Classification on Indonesian Twitter Dataset},\n author={Mei Silviana Saputri, Rahmad Mahendra, and Mirna Adriani},\n booktitle={Proceedings of the 2018 International Conference on Asian Language Processing(IALP)},\n pages={90--95},\n year={2018},\n organization={IEEE}\n}\n@inproceedings{wilie2020indonlu,\ntitle = {{IndoNLU}: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},\nauthors={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},\nbooktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},\nyear={2020}\n}\n", "homepage": "https://www.indobenchmark.com/", "license": "", "features": {"tweet": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 5, "names": ["sadness", "anger", "love", "fear", "happy"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "indonlu", "config_name": "emot", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 686418, "num_examples": 3521, "dataset_name": "indonlu"}, "validation": {"name": "validation", "num_bytes": 84082, "num_examples": 440, "dataset_name": "indonlu"}, "test": {"name": "test", "num_bytes": 84856, "num_examples": 440, "dataset_name": "indonlu"}}, "download_checksums": {"https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/emot_emotion-twitter/train_preprocess.csv": {"num_bytes": 674924, "checksum": "51bb4e77d989004d0ca49c158f404d7eda956015a18b805401f7dee9b4d85fc1"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/emot_emotion-twitter/valid_preprocess.csv": {"num_bytes": 82619, "checksum": "3cba3d7b2cc3afa5cdd452a13df2498ff8a32b420f59ed10a48e41a452c98f50"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/emot_emotion-twitter/test_preprocess_masked_label.csv": {"num_bytes": 83374, "checksum": "85426e71254016c9d85707b77a15ae494eb44b5d1412dbe937c2b74c133293e9"}}, "download_size": 840917, "post_processing_size": null, "dataset_size": 855356, "size_in_bytes": 1696273}, "smsa": {"description": "This sentence-level sentiment analysis dataset (Purwarianti and Crisdayanti, 2019)\nis a collection of comments and reviews in Indonesian obtained from multiple online\nplatforms. The text was crawled and then annotated by several Indonesian linguists\nto construct this dataset. 
There are three possible sentiments on the SmSA\ndataset: positive, negative, and neutral.", "citation": "@inproceedings{purwarianti2019improving,\n title={Improving Bi-LSTM Performance for Indonesian Sentiment Analysis Using Paragraph Vector},\n author={Ayu Purwarianti and Ida Ayu Putu Ari Crisdayanti},\n booktitle={Proceedings of the 2019 International Conference of Advanced Informatics: Concepts, Theory and Applications (ICAICTA)},\n pages={1--5},\n year={2019},\n organization={IEEE}\n}\n@inproceedings{wilie2020indonlu,\ntitle = {{IndoNLU}: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},\nauthors={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},\nbooktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},\nyear={2020}\n}\n", "homepage": "https://www.indobenchmark.com/", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["positive", "neutral", "negative"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "indonlu", "config_name": "smsa", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2209874, "num_examples": 11000, "dataset_name": "indonlu"}, "validation": {"name": "validation", "num_bytes": 249629, "num_examples": 1260, "dataset_name": "indonlu"}, "test": {"name": "test", "num_bytes": 77041, "num_examples": 500, "dataset_name": "indonlu"}}, "download_checksums": {"https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/smsa_doc-sentiment-prosa/train_preprocess.tsv": {"num_bytes": 2186718, "checksum": "50f38ceed9b31521bf1581e126620532cc9b790712938159a2cdcf6906977a9b"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/smsa_doc-sentiment-prosa/valid_preprocess.tsv": {"num_bytes": 246974, "checksum": "6ab41ddc9d58a35086f05ebd2e209c74cb03d87d4f51d6abdfba674eafbefa74"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/smsa_doc-sentiment-prosa/test_preprocess_masked_label.tsv": {"num_bytes": 75537, "checksum": "ecc239ad7a069774954843da50eba266398bab70d0370f74684bf1c107b64e70"}}, "download_size": 2509229, "post_processing_size": null, "dataset_size": 2536544, "size_in_bytes": 5045773}, "casa": {"description": "An aspect-based sentiment analysis dataset consisting of around a thousand car reviews collected\nfrom multiple Indonesian online automobile platforms (Ilmania et al., 2018). The dataset covers\nsix aspects of car quality. 
We define the task to be a multi-label classification task, where\neach label represents a sentiment for a single aspect with three possible values: positive,\nnegative, and neutral.", "citation": "@inproceedings{ilmania2018aspect,\n title={Aspect Detection and Sentiment Classification Using Deep Neural Network for Indonesian Aspect-based Sentiment Analysis},\n author={Arfinda Ilmania, Abdurrahman, Samuel Cahyawijaya, Ayu Purwarianti},\n booktitle={Proceedings of the 2018 International Conference on Asian Language Processing(IALP)},\n pages={62--67},\n year={2018},\n organization={IEEE}\n}\n@inproceedings{wilie2020indonlu,\ntitle = {{IndoNLU}: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},\nauthors={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},\nbooktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},\nyear={2020}\n}\n", "homepage": "https://www.indobenchmark.com/", "license": "", "features": {"sentence": {"dtype": "string", "id": null, "_type": "Value"}, "fuel": {"num_classes": 3, "names": ["negative", "neutral", "positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "machine": {"num_classes": 3, "names": ["negative", "neutral", "positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "others": {"num_classes": 3, "names": ["negative", "neutral", "positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "part": {"num_classes": 3, "names": ["negative", "neutral", "positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "price": {"num_classes": 3, "names": ["negative", "neutral", "positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "service": {"num_classes": 3, "names": ["negative", "neutral", "positive"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "indonlu", "config_name": "casa", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 110415, "num_examples": 810, "dataset_name": "indonlu"}, "validation": {"name": "validation", "num_bytes": 11993, "num_examples": 90, "dataset_name": "indonlu"}, "test": {"name": "test", "num_bytes": 23553, "num_examples": 180, "dataset_name": "indonlu"}}, "download_checksums": {"https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/casa_absa-prosa/train_preprocess.csv": {"num_bytes": 109756, "checksum": "ffd2a88edf5e270cea79ad84d2ca4170c9a2fd71a38540280d5eb3b95d261f76"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/casa_absa-prosa/valid_preprocess.csv": {"num_bytes": 11952, "checksum": "4ea114d060796e59944b1cf7f0ad7950bd0532024348a17d0f7c6b6464328424"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/casa_absa-prosa/test_preprocess_masked_label.csv": {"num_bytes": 23195, "checksum": "9732fbd03cad76753a7bd237d1e17c979003b92b6b04730583d7d1e11796dd81"}}, "download_size": 144903, "post_processing_size": null, "dataset_size": 145961, "size_in_bytes": 290864}, "hoasa": {"description": "An aspect-based sentiment analysis dataset consisting of hotel reviews collected from the hotel\naggregator platform, AiryRooms (Azhar et al., 2019). 
The dataset covers ten different aspects of\nhotel quality. Each review is labeled with a single sentiment label for each aspect. There are\nfour possible sentiment classes for each sentiment label: positive, negative, neutral, and\npositive-negative. The positivenegative label is given to a review that contains multiple sentiments\nof the same aspect but for different objects (e.g., cleanliness of bed and toilet).", "citation": "@inproceedings{azhar2019multi,\n title={Multi-label Aspect Categorization with Convolutional Neural Networks and Extreme Gradient Boosting},\n author={A. N. Azhar, M. L. Khodra, and A. P. Sutiono}\n booktitle={Proceedings of the 2019 International Conference on Electrical Engineering and Informatics (ICEEI)},\n pages={35--40},\n year={2019}\n}\n@inproceedings{wilie2020indonlu,\ntitle = {{IndoNLU}: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},\nauthors={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},\nbooktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},\nyear={2020}\n}\n", "homepage": "https://www.indobenchmark.com/", "license": "", "features": {"sentence": {"dtype": "string", "id": null, "_type": "Value"}, "ac": {"num_classes": 4, "names": ["neg", "neut", "pos", "neg_pos"], "names_file": null, "id": null, "_type": "ClassLabel"}, "air_panas": {"num_classes": 4, "names": ["neg", "neut", "pos", "neg_pos"], "names_file": null, "id": null, "_type": "ClassLabel"}, "bau": {"num_classes": 4, "names": ["neg", "neut", "pos", "neg_pos"], "names_file": null, "id": null, "_type": "ClassLabel"}, "general": {"num_classes": 4, "names": ["neg", "neut", "pos", "neg_pos"], "names_file": null, "id": null, "_type": "ClassLabel"}, "kebersihan": {"num_classes": 4, "names": ["neg", "neut", "pos", "neg_pos"], "names_file": null, "id": null, "_type": "ClassLabel"}, "linen": {"num_classes": 4, "names": ["neg", "neut", "pos", "neg_pos"], "names_file": null, "id": null, "_type": "ClassLabel"}, "service": {"num_classes": 4, "names": ["neg", "neut", "pos", "neg_pos"], "names_file": null, "id": null, "_type": "ClassLabel"}, "sunrise_meal": {"num_classes": 4, "names": ["neg", "neut", "pos", "neg_pos"], "names_file": null, "id": null, "_type": "ClassLabel"}, "tv": {"num_classes": 4, "names": ["neg", "neut", "pos", "neg_pos"], "names_file": null, "id": null, "_type": "ClassLabel"}, "wifi": {"num_classes": 4, "names": ["neg", "neut", "pos", "neg_pos"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "indonlu", "config_name": "hoasa", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 458177, "num_examples": 2283, "dataset_name": "indonlu"}, "validation": {"name": "validation", "num_bytes": 58248, "num_examples": 285, "dataset_name": "indonlu"}, "test": {"name": "test", "num_bytes": 56399, "num_examples": 286, "dataset_name": "indonlu"}}, "download_checksums": {"https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/hoasa_absa-airy/train_preprocess.csv": {"num_bytes": 381239, "checksum": "752935b62235f1a719c5e526e4ac68b3ba452f84a2a6f911ef20cb855b23546d"}, 
"https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/hoasa_absa-airy/valid_preprocess.csv": {"num_bytes": 48696, "checksum": "7109001762f0bd83526d3de224c0ba5302bfb781eee6c1334aac8039a188f4fa"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/hoasa_absa-airy/test_preprocess_masked_label.csv": {"num_bytes": 47379, "checksum": "92f8a0d9e9ceec02a3340444b4b79a4f606d27f7dbe5867150d34a8ca7634e09"}}, "download_size": 477314, "post_processing_size": null, "dataset_size": 572824, "size_in_bytes": 1050138}, "wrete": {"description": "The Wiki Revision Edits Textual Entailment dataset (Setya and Mahendra, 2018) consists of 450 sentence pairs\nconstructed from Wikipedia revision history. The dataset contains pairs of sentences and binary semantic\nrelations between the pairs. The data are labeled as entailed when the meaning of the second sentence can be\nderived from the first one, and not entailed otherwise.", "citation": "@inproceedings{setya2018semi,\n title={Semi-supervised Textual Entailment on Indonesian Wikipedia Data},\n author={Ken Nabila Setya and Rahmad Mahendra},\n booktitle={Proceedings of the 2018 International Conference on Computational Linguistics and Intelligent Text Processing (CICLing)},\n year={2018}\n}\n@inproceedings{wilie2020indonlu,\ntitle = {{IndoNLU}: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},\nauthors={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},\nbooktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},\nyear={2020}\n}\n", "homepage": "https://www.indobenchmark.com/", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "category": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 2, "names": ["NotEntail", "Entail_or_Paraphrase"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "indonlu", "config_name": "wrete", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 99999, "num_examples": 300, "dataset_name": "indonlu"}, "validation": {"name": "validation", "num_bytes": 18049, "num_examples": 50, "dataset_name": "indonlu"}, "test": {"name": "test", "num_bytes": 32617, "num_examples": 100, "dataset_name": "indonlu"}}, "download_checksums": {"https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/wrete_entailment-ui/train_preprocess.csv": {"num_bytes": 100641, "checksum": "e135a85dad098127da179e305ebf0a1af63bc0cf06fdf79392293964d2920af3"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/wrete_entailment-ui/valid_preprocess.csv": {"num_bytes": 18191, "checksum": "623d0dc1389c37af6482277a05702329bddee96b73849f7341f9c5b269a55286"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/wrete_entailment-ui/test_preprocess_masked_label.csv": {"num_bytes": 32186, "checksum": "075a0de3a327546f04d83f345faffc45b4d2788ad7c3a2fdb11c23c29afba5a6"}}, "download_size": 151018, "post_processing_size": null, "dataset_size": 150665, "size_in_bytes": 301683}, "posp": {"description": "This 
Indonesian part-of-speech tagging (POS) dataset (Hoesen and Purwarianti, 2018) is collected from Indonesian\nnews websites. The dataset consists of around 8000 sentences with 26 POS tags. The POS tag labels follow the\nIndonesian Association of Computational Linguistics (INACL) POS Tagging Convention.", "citation": "@inproceedings{hoesen2018investigating,\n title={Investigating Bi-LSTM and CRF with POS Tag Embedding for Indonesian Named Entity Tagger},\n author={Devin Hoesen and Ayu Purwarianti},\n booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)},\n pages={35--38},\n year={2018},\n organization={IEEE}\n}\n@inproceedings{wilie2020indonlu,\ntitle = {{IndoNLU}: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},\nauthors={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},\nbooktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},\nyear={2020}\n}\n", "homepage": "https://www.indobenchmark.com/", "license": "", "features": {"tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags": {"feature": {"num_classes": 26, "names": ["B-PPO", "B-KUA", "B-ADV", "B-PRN", "B-VBI", "B-PAR", "B-VBP", "B-NNP", "B-UNS", "B-VBT", "B-VBL", "B-NNO", "B-ADJ", "B-PRR", "B-PRK", "B-CCN", "B-$$$", "B-ADK", "B-ART", "B-CSN", "B-NUM", "B-SYM", "B-INT", "B-NEG", "B-PRI", "B-VBE"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "indonlu", "config_name": "posp", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2751348, "num_examples": 6720, "dataset_name": "indonlu"}, "validation": {"name": "validation", "num_bytes": 343924, "num_examples": 840, "dataset_name": "indonlu"}, "test": {"name": "test", "num_bytes": 350720, "num_examples": 840, "dataset_name": "indonlu"}}, "download_checksums": {"https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/posp_pos-prosa/train_preprocess.txt": {"num_bytes": 1922251, "checksum": "667d16f7e3e424fc1bf3d1aff8d99a0045ff07ca382c467f612d9ddc420803a1"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/posp_pos-prosa/valid_preprocess.txt": {"num_bytes": 239887, "checksum": "9af8289324391466c282132a8b47323b38e84daa3f0dd9b0d972da0a9f0970a9"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/posp_pos-prosa/test_preprocess_masked_label.txt": {"num_bytes": 245068, "checksum": "d120967bc7eaba5db88654902e2958af1ae7121e6a55b396e639cb8cf1d330d0"}}, "download_size": 2407206, "post_processing_size": null, "dataset_size": 3445992, "size_in_bytes": 5853198}, "bapos": {"description": "This POS tagging dataset (Dinakaramani et al., 2014) contains about 1000 sentences, collected from the PAN Localization\nProject. In this dataset, each word is tagged by one of 23 POS tag classes. 
Data splitting used in this benchmark follows\nthe experimental setting used by Kurniawan and Aji (2018)", "citation": "@inproceedings{dinakaramani2014designing,\n title={Designing an Indonesian Part of Speech Tagset and Manually Tagged Indonesian Corpus},\n author={Arawinda Dinakaramani, Fam Rashel, Andry Luthfi, and Ruli Manurung},\n booktitle={Proceedings of the 2014 International Conference on Asian Language Processing (IALP)},\n pages={66--69},\n year={2014},\n organization={IEEE}\n}\n@inproceedings{kurniawan2019toward,\n title={Toward a Standardized and More Accurate Indonesian Part-of-Speech Tagging},\n author={Kemal Kurniawan and Alham Fikri Aji},\n booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)},\n pages={303--307},\n year={2018},\n organization={IEEE}\n}\n@inproceedings{wilie2020indonlu,\ntitle = {{IndoNLU}: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},\nauthors={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},\nbooktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},\nyear={2020}\n}\n", "homepage": "https://www.indobenchmark.com/", "license": "", "features": {"tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags": {"feature": {"num_classes": 41, "names": ["B-PR", "B-CD", "I-PR", "B-SYM", "B-JJ", "B-DT", "I-UH", "I-NND", "B-SC", "I-WH", "I-IN", "I-NNP", "I-VB", "B-IN", "B-NND", "I-CD", "I-JJ", "I-X", "B-OD", "B-RP", "B-RB", "B-NNP", "I-RB", "I-Z", "B-CC", "B-NEG", "B-VB", "B-NN", "B-MD", "B-UH", "I-NN", "B-PRP", "I-SC", "B-Z", "I-PRP", "I-OD", "I-SYM", "B-WH", "B-FW", "I-CC", "B-X"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "indonlu", "config_name": "bapos", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 3772459, "num_examples": 8000, "dataset_name": "indonlu"}, "validation": {"name": "validation", "num_bytes": 460058, "num_examples": 1000, "dataset_name": "indonlu"}, "test": {"name": "test", "num_bytes": 474368, "num_examples": 1029, "dataset_name": "indonlu"}}, "download_checksums": {"https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/bapos_pos-idn/train_preprocess.txt": {"num_bytes": 2450176, "checksum": "260f0808b494335c77b5475348e016d7b64fdea1fbd07b45a232b84bc3c300b4"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/bapos_pos-idn/valid_preprocess.txt": {"num_bytes": 300182, "checksum": "599eebd10e01eaa452625939ff022c527abebedac4a91e84cddfa57abccc3a12"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/bapos_pos-idn/test_preprocess_masked_label.txt": {"num_bytes": 333663, "checksum": "7dfabd5e212483677e17dceec50d0cd9854a206a7a6e3f99168b446ee2eff5e6"}}, "download_size": 3084021, "post_processing_size": null, "dataset_size": 4706885, "size_in_bytes": 7790906}, "terma": {"description": "This span-extraction dataset is collected from the hotel aggregator platform, AiryRooms (Septiandri and Sutiono, 2019;\nFernando et al., 2019). 
The dataset consists of thousands of hotel reviews, which each contain a span label for aspect\nand sentiment words representing the opinion of the reviewer on the corresponding aspect. The labels use\nInside-Outside-Beginning (IOB) tagging representation with two kinds of tags, aspect and sentiment.", "citation": "@article{winatmoko2019aspect,\n title={Aspect and Opinion Term Extraction for Hotel Reviews Using Transfer Learning and Auxiliary Labels},\n author={Yosef Ardhito Winatmoko, Ali Akbar Septiandri, Arie Pratama Sutiono},\n journal={arXiv preprint arXiv:1909.11879},\n year={2019}\n}\n@article{fernando2019aspect,\n title={Aspect and Opinion Terms Extraction Using Double Embeddings and Attention Mechanism for Indonesian Hotel Reviews},\n author={Jordhy Fernando, Masayu Leylia Khodra, Ali Akbar Septiandri},\n journal={arXiv preprint arXiv:1908.04899},\n year={2019}\n}\n@inproceedings{wilie2020indonlu,\ntitle = {{IndoNLU}: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},\nauthors={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},\nbooktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},\nyear={2020}\n}\n", "homepage": "https://www.indobenchmark.com/", "license": "", "features": {"tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "seq_label": {"feature": {"num_classes": 5, "names": ["I-SENTIMENT", "O", "I-ASPECT", "B-SENTIMENT", "B-ASPECT"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "indonlu", "config_name": "terma", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 817983, "num_examples": 3000, "dataset_name": "indonlu"}, "validation": {"name": "validation", "num_bytes": 276335, "num_examples": 1000, "dataset_name": "indonlu"}, "test": {"name": "test", "num_bytes": 265922, "num_examples": 1000, "dataset_name": "indonlu"}}, "download_checksums": {"https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/terma_term-extraction-airy/train_preprocess.txt": {"num_bytes": 521607, "checksum": "5da1a89793eb0ea996874212e551a766d31f860c3797a186729bc6829b6a5610"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/terma_term-extraction-airy/valid_preprocess.txt": {"num_bytes": 175787, "checksum": "7bc98ac730da9beaba2c65ec2332a2e9c1953f060fa4cbf50a734274abbdfa60"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/terma_term-extraction-airy/test_preprocess_masked_label.txt": {"num_bytes": 119428, "checksum": "9933206180014ed264bfd8ade1468b2c4bb1a40698925e34d6ae8bec63a48b7c"}}, "download_size": 816822, "post_processing_size": null, "dataset_size": 1360240, "size_in_bytes": 2177062}, "keps": {"description": "This keyphrase extraction dataset (Mahfuzh et al., 2019) consists of text from Twitter discussing\nbanking products and services and is written in the Indonesian language. A phrase containing\nimportant information is considered a keyphrase. 
Text may contain one or more keyphrases since\nimportant phrases can be located at different positions. The dataset follows the IOB chunking format,\nwhich represents the position of the keyphrase.", "citation": "@inproceedings{mahfuzh2019improving,\n title={Improving Joint Layer RNN based Keyphrase Extraction by Using Syntactical Features},\n author={Miftahul Mahfuzh, Sidik Soleman, and Ayu Purwarianti},\n booktitle={Proceedings of the 2019 International Conference of Advanced Informatics: Concepts, Theory and Applications (ICAICTA)},\n pages={1--6},\n year={2019},\n organization={IEEE}\n}\n@inproceedings{wilie2020indonlu,\ntitle = {{IndoNLU}: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},\nauthors={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},\nbooktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},\nyear={2020}\n}\n", "homepage": "https://www.indobenchmark.com/", "license": "", "features": {"tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "seq_label": {"feature": {"num_classes": 3, "names": ["O", "B", "I"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "indonlu", "config_name": "keps", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 173961, "num_examples": 800, "dataset_name": "indonlu"}, "validation": {"name": "validation", "num_bytes": 42961, "num_examples": 200, "dataset_name": "indonlu"}, "test": {"name": "test", "num_bytes": 66762, "num_examples": 247, "dataset_name": "indonlu"}}, "download_checksums": {"https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/keps_keyword-extraction-prosa/train_preprocess.txt": {"num_bytes": 82084, "checksum": "c863e6e4d4a16f1026aca198dd35ca018115e061c3352ee9268cd7b6b0f9f298"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/keps_keyword-extraction-prosa/valid_preprocess.txt": {"num_bytes": 20291, "checksum": "e3a3d38c9aaab0981b480a6d6ff6579e4453995b64e67744b7260a79f6fc38f3"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/keps_keyword-extraction-prosa/test_preprocess_masked_label.txt": {"num_bytes": 31667, "checksum": "9731cbc128169f1a549aaf2b516bc0657f8170dfcc8c124cbdf2d5031fcb5de6"}}, "download_size": 134042, "post_processing_size": null, "dataset_size": 283684, "size_in_bytes": 417726}, "nergrit": {"description": "This NER dataset is taken from the Grit-ID repository, and the labels are spans in IOB chunking representation.\nThe dataset consists of three kinds of named entity tags, PERSON (name of person), PLACE (name of location), and\nORGANIZATION (name of organization).", "citation": "@online{nergrit2019,\n title={NERGrit Corpus},\n author={NERGrit Developers},\n year={2019},\n url={https://github.com/grit-id/nergrit-corpus}\n}\n@inproceedings{wilie2020indonlu,\ntitle = {{IndoNLU}: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},\nauthors={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. 
Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},\nbooktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},\nyear={2020}\n}\n", "homepage": "https://www.indobenchmark.com/", "license": "", "features": {"tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 7, "names": ["I-PERSON", "B-ORGANISATION", "I-ORGANISATION", "B-PLACE", "I-PLACE", "O", "B-PERSON"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "indonlu", "config_name": "nergrit", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 960710, "num_examples": 1672, "dataset_name": "indonlu"}, "validation": {"name": "validation", "num_bytes": 119567, "num_examples": 209, "dataset_name": "indonlu"}, "test": {"name": "test", "num_bytes": 117274, "num_examples": 209, "dataset_name": "indonlu"}}, "download_checksums": {"https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/nergrit_ner-grit/train_preprocess.txt": {"num_bytes": 522268, "checksum": "4bbef1355fad21b405b5c511a7c80331a5ee71c91db9b82dc03efda5cb99f964"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/nergrit_ner-grit/valid_preprocess.txt": {"num_bytes": 64884, "checksum": "330ee7307f40f5e999110e02390243c07180b78c2e2a06f8a529ab46b0f4e907"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/nergrit_ner-grit/test_preprocess_masked_label.txt": {"num_bytes": 54113, "checksum": "08eed4592b26532fa08a0d04f929e3c4d72add5680e42eaf8f4a0ee9687b5289"}}, "download_size": 641265, "post_processing_size": null, "dataset_size": 1197551, "size_in_bytes": 1838816}, "nerp": {"description": "This NER dataset (Hoesen and Purwarianti, 2018) contains texts collected from several Indonesian news websites.\nThere are five labels available in this dataset, PER (name of person), LOC (name of location), IND (name of product or brand),\nEVT (name of the event), and FNB (name of food and beverage). The NERP dataset uses the IOB chunking format.", "citation": "@inproceedings{hoesen2018investigating,\n title={Investigating Bi-LSTM and CRF with POS Tag Embedding for Indonesian Named Entity Tagger},\n author={Devin Hoesen and Ayu Purwarianti},\n booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)},\n pages={35--38},\n year={2018},\n organization={IEEE}\n}\n@inproceedings{wilie2020indonlu,\ntitle = {{IndoNLU}: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},\nauthors={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. 
Purwarianti},\nbooktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},\nyear={2020}\n}\n", "homepage": "https://www.indobenchmark.com/", "license": "", "features": {"tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 11, "names": ["I-PPL", "B-EVT", "B-PLC", "I-IND", "B-IND", "B-FNB", "I-EVT", "B-PPL", "I-PLC", "O", "I-FNB"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "indonlu", "config_name": "nerp", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2751348, "num_examples": 6720, "dataset_name": "indonlu"}, "validation": {"name": "validation", "num_bytes": 343924, "num_examples": 840, "dataset_name": "indonlu"}, "test": {"name": "test", "num_bytes": 350720, "num_examples": 840, "dataset_name": "indonlu"}}, "download_checksums": {"https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/nerp_ner-prosa/train_preprocess.txt": {"num_bytes": 1387891, "checksum": "0361c2b4a40298f00c027ad80b3c29a1f2a14c3d6fea91ec292820af25821a2d"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/nerp_ner-prosa/valid_preprocess.txt": {"num_bytes": 172835, "checksum": "5e08679148ada73a809a52fbe9695ac8d9b0acfe4e3e8f686fa6ab16048b4863"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/nerp_ner-prosa/test_preprocess_masked_label.txt": {"num_bytes": 165260, "checksum": "3242d38bd17a3d16e2d29f19cee7d59c56e5edb4c1e5dcd90e57ba045b06233c"}}, "download_size": 1725986, "post_processing_size": null, "dataset_size": 3445992, "size_in_bytes": 5171978}, "facqa": {"description": "The goal of the FacQA dataset is to find the answer to a question from a provided short passage from\na news article (Purwarianti et al., 2007). Each row in the FacQA dataset consists of a question,\na short passage, and a label phrase, which can be found inside the corresponding short passage.\nThere are six categories of questions: date, location, name, organization, person, and quantitative.", "citation": "@inproceedings{purwarianti2007machine,\n title={A Machine Learning Approach for Indonesian Question Answering System},\n author={Ayu Purwarianti, Masatoshi Tsuchiya, and Seiichi Nakagawa},\n booktitle={Proceedings of Artificial Intelligence and Applications },\n pages={573--578},\n year={2007}\n}\n@inproceedings{wilie2020indonlu,\ntitle = {{IndoNLU}: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},\nauthors={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. 
Purwarianti},\nbooktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},\nyear={2020}\n}\n", "homepage": "https://www.indobenchmark.com/", "license": "", "features": {"question": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "passage": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "seq_label": {"feature": {"num_classes": 3, "names": ["O", "B", "I"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "indonlu", "config_name": "facqa", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2454368, "num_examples": 2495, "dataset_name": "indonlu"}, "validation": {"name": "validation", "num_bytes": 306249, "num_examples": 311, "dataset_name": "indonlu"}, "test": {"name": "test", "num_bytes": 306831, "num_examples": 311, "dataset_name": "indonlu"}}, "download_checksums": {"https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/facqa_qa-factoid-itb/train_preprocess.csv": {"num_bytes": 2073762, "checksum": "cc738d6ec42cfb76eb36899616361c5d789ff8408afc94fbc2cdd102e7ce00cc"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/facqa_qa-factoid-itb/valid_preprocess.csv": {"num_bytes": 258917, "checksum": "ad0fa5056b141b4898f6de37f68416fe4e01c58e1e960a97e45b3b6b7cdfb5fd"}, "https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/facqa_qa-factoid-itb/test_preprocess_masked_label.csv": {"num_bytes": 259289, "checksum": "652e330c83eeaa2f0e965eb0fa75e8889cc7199a23f16843103e4e78946f7583"}}, "download_size": 2591968, "post_processing_size": null, "dataset_size": 3067448, "size_in_bytes": 5659416}}
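The metadata above records, for every config, the expected split sizes, feature schema, and download checksums. As a quick sanity check, here is a minimal usage sketch (assuming the `datasets` library from this release is installed and the `raw.githubusercontent.com` URLs listed above are reachable) that loads the `bapos` config and compares its split sizes against the recorded numbers:

```python
# Minimal sketch: load the `bapos` config and check the split sizes against
# the numbers recorded in dataset_infos.json above (train=8000,
# validation=1000, test=1029). Assumes the `datasets` library is installed
# and the raw GitHub URLs above are reachable.
from datasets import load_dataset

bapos = load_dataset("indonlu", "bapos")

for split, expected in [("train", 8000), ("validation", 1000), ("test", 1029)]:
    assert len(bapos[split]) == expected, (split, len(bapos[split]), expected)

# Each example pairs a token sequence with ClassLabel-encoded POS tags;
# int2str recovers the tag names ("B-NN", "B-VB", ...).
example = bapos["train"][0]
tag_feature = bapos["train"].features["pos_tags"].feature
print(example["tokens"][:5])
print([tag_feature.int2str(i) for i in example["pos_tags"][:5]])
```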
dummy/bapos/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f958a423fa32b20ae1a11d091817c4e3ab08561108f6d6d14c7da7b65f89b255
3
+ size 1361
dummy/casa/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:33ab57fac8ea1abd497cb602a5beb935e07ccc6e9d76cbd3e53074a72fa386e0
3
+ size 7223
dummy/emot/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a755b6dbd548d95685abe9150e8c3da64a5f5d5a9316eb994d81b50628c816e6
3
+ size 14443
dummy/facqa/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c3ab7920ba1982cac09cdfdeb4fa53cc342101517ecb2a5bf41ffcf0d134f1cf
3
+ size 27265
dummy/hoasa/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:42583d60cfb3a3cdae37ea4c8fb686695bc6f6db640560e939b5de7d59b802fa
3
+ size 8962
dummy/keps/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ac7e4c27258fd41b318c89072f3eab0b000dbf7d8ab3572ed7e3de0fce1698ab
3
+ size 1231
dummy/nergrit/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a59981a2550e9a8f9260031020692f0a252c97f44f910b840e360707aafa1d9c
3
+ size 1300
dummy/nerp/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c85cba140f07caa7b69dcfd688fa4bead81a1bc467dbcf6a08a0294eff07fcad
3
+ size 1281
dummy/posp/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:001ca135f5f56520eee6439061666a248141da72b7c03152151135e92637c06d
3
+ size 1393
dummy/smsa/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e5ea74058203e5ae4245d64001f72c901cc8d2cf05572b75db990a487dececa9
3
+ size 12315
dummy/terma/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0b4d04f15519467964245a6e56ad3b0887268bc1a18a17eb893c538ce6027418
3
+ size 1242
dummy/wrete/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c11985df5ba7204d264d8edb1447494fbd8dc8229404d119023b068f1f0039a0
3
+ size 17121
indonlu.py ADDED
@@ -0,0 +1,644 @@
1
+ # coding=utf-8
2
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for Bahasa Indonesia"""
16
+
17
+ from __future__ import absolute_import, division, print_function
18
+
19
+ import ast
20
+ import csv
21
+ import textwrap
22
+
23
+ import six
24
+
25
+ import datasets
26
+
27
+
28
+ _INDONLU_CITATION = """\
29
+ @inproceedings{wilie2020indonlu,
30
+ title = {{IndoNLU}: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
31
+ authors={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
32
+ booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
33
+ year={2020}
34
+ }
35
+ """
36
+
37
+ _INDONLU_DESCRIPTION = """\
38
+ The IndoNLU benchmark is a collection of resources for training, evaluating, \
39
+ and analyzing natural language understanding systems for Bahasa Indonesia.
40
+ """
41
+
42
+ _INDONLU_HOMEPAGE = "https://www.indobenchmark.com/"
43
+
44
+ _INDONLU_LICENSE = "https://raw.githubusercontent.com/indobenchmark/indonlu/master/LICENSE"
45
+
46
+
47
+ class IndonluConfig(datasets.BuilderConfig):
48
+ """BuilderConfig for IndoNLU"""
49
+
50
+ def __init__(
51
+ self,
52
+ text_features,
53
+ label_column,
54
+ label_classes,
55
+ train_url,
56
+ valid_url,
57
+ test_url,
58
+ citation,
59
+ **kwargs,
60
+ ):
61
+ """BuilderConfig for IndoNLU.
62
+
63
+ Args:
64
+ text_features: `dict[string, string]`, map from the name of the feature
65
+ dict for each text field to the name of the column in the txt/csv/tsv file
66
+ label_column: `string`, name of the column in the txt/csv/tsv file corresponding
67
+ to the label
68
+ label_classes: `list[string]`, the list of classes if the label is categorical
69
+ train_url: `string`, url to train file from
70
+ valid_url: `string`, url to valid file from
71
+ test_url: `string`, url to test file from
72
+ citation: `string`, citation for the data set
73
+ **kwargs: keyword arguments forwarded to super.
74
+ """
75
+ super(IndonluConfig, self).__init__(version=datasets.Version("1.0.0", ""), **kwargs)
76
+ self.text_features = text_features
77
+ self.label_column = label_column
78
+ self.label_classes = label_classes
79
+ self.train_url = train_url
80
+ self.valid_url = valid_url
81
+ self.test_url = test_url
82
+ self.citation = citation
83
+
84
+
85
+ class Indonlu(datasets.GeneratorBasedBuilder):
86
+ """Indonesian Natural Language Understanding (IndoNLU) benchmark"""
87
+
88
+ BUILDER_CONFIGS = [
89
+ IndonluConfig(
90
+ name="emot",
91
+ description=textwrap.dedent(
92
+ """\
93
+ An emotion classification dataset collected from the social media
94
+ platform Twitter (Saputri et al., 2018). The dataset consists of
95
+ around 4000 Indonesian colloquial language tweets, covering five
96
+ different emotion labels: sadness, anger, love, fear, and happy."""
97
+ ),
98
+ text_features={"tweet": "tweet"},
99
+ # label classes sorted refer to https://github.com/indobenchmark/indonlu/blob/master/utils/data_utils.py
100
+ label_classes=["sadness", "anger", "love", "fear", "happy"],
101
+ label_column="label",
102
+ train_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/emot_emotion-twitter/train_preprocess.csv",
103
+ valid_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/emot_emotion-twitter/valid_preprocess.csv",
104
+ test_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/emot_emotion-twitter/test_preprocess_masked_label.csv",
105
+ citation=textwrap.dedent(
106
+ """\
107
+ @inproceedings{saputri2018emotion,
108
+ title={Emotion Classification on Indonesian Twitter Dataset},
109
+ author={Mei Silviana Saputri, Rahmad Mahendra, and Mirna Adriani},
110
+ booktitle={Proceedings of the 2018 International Conference on Asian Language Processing(IALP)},
111
+ pages={90--95},
112
+ year={2018},
113
+ organization={IEEE}
114
+ }"""
115
+ ),
116
+ ),
117
+ IndonluConfig(
118
+ name="smsa",
119
+ description=textwrap.dedent(
120
+ """\
121
+ This sentence-level sentiment analysis dataset (Purwarianti and Crisdayanti, 2019)
122
+ is a collection of comments and reviews in Indonesian obtained from multiple online
123
+ platforms. The text was crawled and then annotated by several Indonesian linguists
124
+ to construct this dataset. There are three possible sentiments on the SmSA
125
+ dataset: positive, negative, and neutral."""
126
+ ),
127
+ text_features={"text": "text"},
128
+ # label classes sorted refer to https://github.com/indobenchmark/indonlu/blob/master/utils/data_utils.py
129
+ label_classes=["positive", "neutral", "negative"],
130
+ label_column="label",
131
+ train_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/smsa_doc-sentiment-prosa/train_preprocess.tsv",
132
+ valid_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/smsa_doc-sentiment-prosa/valid_preprocess.tsv",
133
+ test_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/smsa_doc-sentiment-prosa/test_preprocess_masked_label.tsv",
134
+ citation=textwrap.dedent(
135
+ """\
136
+ @inproceedings{purwarianti2019improving,
137
+ title={Improving Bi-LSTM Performance for Indonesian Sentiment Analysis Using Paragraph Vector},
138
+ author={Ayu Purwarianti and Ida Ayu Putu Ari Crisdayanti},
139
+ booktitle={Proceedings of the 2019 International Conference of Advanced Informatics: Concepts, Theory and Applications (ICAICTA)},
140
+ pages={1--5},
141
+ year={2019},
142
+ organization={IEEE}
143
+ }"""
144
+ ),
145
+ ),
146
+ IndonluConfig(
147
+ name="casa",
148
+ description=textwrap.dedent(
149
+ """\
150
+ An aspect-based sentiment analysis dataset consisting of around a thousand car reviews collected
151
+ from multiple Indonesian online automobile platforms (Ilmania et al., 2018). The dataset covers
152
+ six aspects of car quality. We define the task to be a multi-label classification task, where
153
+ each label represents a sentiment for a single aspect with three possible values: positive,
154
+ negative, and neutral."""
155
+ ),
156
+ text_features={"sentence": "sentence"},
157
+ # label classes sorted refer to https://github.com/indobenchmark/indonlu/blob/master/utils/data_utils.py
158
+ label_classes=["negative", "neutral", "positive"],
159
+ label_column=["fuel", "machine", "others", "part", "price", "service"],
160
+ train_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/casa_absa-prosa/train_preprocess.csv",
161
+ valid_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/casa_absa-prosa/valid_preprocess.csv",
162
+ test_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/casa_absa-prosa/test_preprocess_masked_label.csv",
163
+ citation=textwrap.dedent(
164
+ """\
165
+ @inproceedings{ilmania2018aspect,
166
+ title={Aspect Detection and Sentiment Classification Using Deep Neural Network for Indonesian Aspect-based Sentiment Analysis},
167
+ author={Arfinda Ilmania, Abdurrahman, Samuel Cahyawijaya, Ayu Purwarianti},
168
+ booktitle={Proceedings of the 2018 International Conference on Asian Language Processing(IALP)},
169
+ pages={62--67},
170
+ year={2018},
171
+ organization={IEEE}
172
+ }"""
173
+ ),
174
+ ),
175
+ IndonluConfig(
176
+ name="hoasa",
177
+ description=textwrap.dedent(
178
+ """\
179
+ An aspect-based sentiment analysis dataset consisting of hotel reviews collected from the hotel
180
+ aggregator platform, AiryRooms (Azhar et al., 2019). The dataset covers ten different aspects of
181
+ hotel quality. Each review is labeled with a single sentiment label for each aspect. There are
182
+ four possible sentiment classes for each sentiment label: positive, negative, neutral, and
183
+ positive-negative. The positive-negative label is given to a review that contains multiple sentiments
184
+ of the same aspect but for different objects (e.g., cleanliness of bed and toilet)."""
185
+ ),
186
+ text_features={"sentence": "sentence"},
187
+ # label classes sorted refer to https://github.com/indobenchmark/indonlu/blob/master/utils/data_utils.py
188
+ label_classes=["neg", "neut", "pos", "neg_pos"],
189
+ label_column=[
190
+ "ac",
191
+ "air_panas",
192
+ "bau",
193
+ "general",
194
+ "kebersihan",
195
+ "linen",
196
+ "service",
197
+ "sunrise_meal",
198
+ "tv",
199
+ "wifi",
200
+ ],
201
+ train_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/hoasa_absa-airy/train_preprocess.csv",
202
+ valid_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/hoasa_absa-airy/valid_preprocess.csv",
203
+ test_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/hoasa_absa-airy/test_preprocess_masked_label.csv",
204
+ citation=textwrap.dedent(
205
+ """\
206
+ @inproceedings{azhar2019multi,
207
+ title={Multi-label Aspect Categorization with Convolutional Neural Networks and Extreme Gradient Boosting},
208
+ author={A. N. Azhar, M. L. Khodra, and A. P. Sutiono}
209
+ booktitle={Proceedings of the 2019 International Conference on Electrical Engineering and Informatics (ICEEI)},
210
+ pages={35--40},
211
+ year={2019}
212
+ }"""
213
+ ),
214
+ ),
215
+ IndonluConfig(
216
+ name="wrete",
217
+ description=textwrap.dedent(
218
+ """\
219
+ The Wiki Revision Edits Textual Entailment dataset (Setya and Mahendra, 2018) consists of 450 sentence pairs
220
+ constructed from Wikipedia revision history. The dataset contains pairs of sentences and binary semantic
221
+ relations between the pairs. The data are labeled as entailed when the meaning of the second sentence can be
222
+ derived from the first one, and not entailed otherwise."""
223
+ ),
224
+ text_features={
225
+ "premise": "premise",
226
+ "hypothesis": "hypothesis",
227
+ "category": "category",
228
+ },
229
+ # label classes sorted refer to https://github.com/indobenchmark/indonlu/blob/master/utils/data_utils.py
230
+ label_classes=["NotEntail", "Entail_or_Paraphrase"],
231
+ label_column="label",
232
+ train_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/wrete_entailment-ui/train_preprocess.csv",
233
+ valid_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/wrete_entailment-ui/valid_preprocess.csv",
234
+ test_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/wrete_entailment-ui/test_preprocess_masked_label.csv",
235
+ citation=textwrap.dedent(
236
+ """\
237
+ @inproceedings{setya2018semi,
238
+ title={Semi-supervised Textual Entailment on Indonesian Wikipedia Data},
239
+ author={Ken Nabila Setya and Rahmad Mahendra},
240
+ booktitle={Proceedings of the 2018 International Conference on Computational Linguistics and Intelligent Text Processing (CICLing)},
241
+ year={2018}
242
+ }"""
243
+ ),
244
+ ),
245
+ IndonluConfig(
246
+ name="posp",
247
+ description=textwrap.dedent(
248
+ """\
249
+ This Indonesian part-of-speech tagging (POS) dataset (Hoesen and Purwarianti, 2018) is collected from Indonesian
250
+ news websites. The dataset consists of around 8000 sentences with 26 POS tags. The POS tag labels follow the
251
+ Indonesian Association of Computational Linguistics (INACL) POS Tagging Convention."""
252
+ ),
253
+ text_features={"tokens": "tokens"},
254
+ # label classes sorted refer to https://github.com/indobenchmark/indonlu/blob/master/utils/data_utils.py
255
+ label_classes=[
256
+ "B-PPO",
257
+ "B-KUA",
258
+ "B-ADV",
259
+ "B-PRN",
260
+ "B-VBI",
261
+ "B-PAR",
262
+ "B-VBP",
263
+ "B-NNP",
264
+ "B-UNS",
265
+ "B-VBT",
266
+ "B-VBL",
267
+ "B-NNO",
268
+ "B-ADJ",
269
+ "B-PRR",
270
+ "B-PRK",
271
+ "B-CCN",
272
+ "B-$$$",
273
+ "B-ADK",
274
+ "B-ART",
275
+ "B-CSN",
276
+ "B-NUM",
277
+ "B-SYM",
278
+ "B-INT",
279
+ "B-NEG",
280
+ "B-PRI",
281
+ "B-VBE",
282
+ ],
283
+ label_column="pos_tags",
284
+ train_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/posp_pos-prosa/train_preprocess.txt",
285
+ valid_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/posp_pos-prosa/valid_preprocess.txt",
286
+ test_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/posp_pos-prosa/test_preprocess_masked_label.txt",
287
+ citation=textwrap.dedent(
288
+ """\
289
+ @inproceedings{hoesen2018investigating,
290
+ title={Investigating Bi-LSTM and CRF with POS Tag Embedding for Indonesian Named Entity Tagger},
291
+ author={Devin Hoesen and Ayu Purwarianti},
292
+ booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)},
293
+ pages={35--38},
294
+ year={2018},
295
+ organization={IEEE}
296
+ }"""
297
+ ),
298
+ ),
299
+ IndonluConfig(
300
+ name="bapos",
301
+ description=textwrap.dedent(
302
+ """\
303
+ This POS tagging dataset (Dinakaramani et al., 2014) contains about 1000 sentences, collected from the PAN Localization
304
+ Project. In this dataset, each word is tagged by one of 23 POS tag classes. Data splitting used in this benchmark follows
305
+ the experimental setting used by Kurniawan and Aji (2018)"""
306
+ ),
307
+ text_features={"tokens": "tokens"},
308
+ # label classes sorted refer to https://github.com/indobenchmark/indonlu/blob/master/utils/data_utils.py
309
+ label_classes=[
310
+ "B-PR",
311
+ "B-CD",
312
+ "I-PR",
313
+ "B-SYM",
314
+ "B-JJ",
315
+ "B-DT",
316
+ "I-UH",
317
+ "I-NND",
318
+ "B-SC",
319
+ "I-WH",
320
+ "I-IN",
321
+ "I-NNP",
322
+ "I-VB",
323
+ "B-IN",
324
+ "B-NND",
325
+ "I-CD",
326
+ "I-JJ",
327
+ "I-X",
328
+ "B-OD",
329
+ "B-RP",
330
+ "B-RB",
331
+ "B-NNP",
332
+ "I-RB",
333
+ "I-Z",
334
+ "B-CC",
335
+ "B-NEG",
336
+ "B-VB",
337
+ "B-NN",
338
+ "B-MD",
339
+ "B-UH",
340
+ "I-NN",
341
+ "B-PRP",
342
+ "I-SC",
343
+ "B-Z",
344
+ "I-PRP",
345
+ "I-OD",
346
+ "I-SYM",
347
+ "B-WH",
348
+ "B-FW",
349
+ "I-CC",
350
+ "B-X",
351
+ ],
352
+ label_column="pos_tags",
353
+ train_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/bapos_pos-idn/train_preprocess.txt",
354
+ valid_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/bapos_pos-idn/valid_preprocess.txt",
355
+ test_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/bapos_pos-idn/test_preprocess_masked_label.txt",
356
+ citation=textwrap.dedent(
357
+ """\
358
+ @inproceedings{dinakaramani2014designing,
359
+ title={Designing an Indonesian Part of Speech Tagset and Manually Tagged Indonesian Corpus},
360
+ author={Arawinda Dinakaramani, Fam Rashel, Andry Luthfi, and Ruli Manurung},
361
+ booktitle={Proceedings of the 2014 International Conference on Asian Language Processing (IALP)},
362
+ pages={66--69},
363
+ year={2014},
364
+ organization={IEEE}
365
+ }
366
+ @inproceedings{kurniawan2019toward,
367
+ title={Toward a Standardized and More Accurate Indonesian Part-of-Speech Tagging},
368
+ author={Kemal Kurniawan and Alham Fikri Aji},
369
+ booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)},
370
+ pages={303--307},
371
+ year={2018},
372
+ organization={IEEE}
373
+ }"""
374
+ ),
375
+ ),
376
+ IndonluConfig(
377
+ name="terma",
378
+ description=textwrap.dedent(
379
+ """\
380
+ This span-extraction dataset is collected from the hotel aggregator platform, AiryRooms (Septiandri and Sutiono, 2019;
381
+ Fernando et al., 2019). The dataset consists of thousands of hotel reviews, which each contain a span label for aspect
382
+ and sentiment words representing the opinion of the reviewer on the corresponding aspect. The labels use
383
+ Inside-Outside-Beginning (IOB) tagging representation with two kinds of tags, aspect and sentiment."""
384
+ ),
385
+ text_features={"tokens": "tokens"},
386
+ # label classes sorted refer to https://github.com/indobenchmark/indonlu/blob/master/utils/data_utils.py
387
+ label_classes=["I-SENTIMENT", "O", "I-ASPECT", "B-SENTIMENT", "B-ASPECT"],
388
+ label_column="seq_label",
389
+ train_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/terma_term-extraction-airy/train_preprocess.txt",
390
+ valid_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/terma_term-extraction-airy/valid_preprocess.txt",
391
+ test_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/terma_term-extraction-airy/test_preprocess_masked_label.txt",
392
+ citation=textwrap.dedent(
393
+ """\
394
+ @article{winatmoko2019aspect,
395
+ title={Aspect and Opinion Term Extraction for Hotel Reviews Using Transfer Learning and Auxiliary Labels},
396
+ author={Yosef Ardhito Winatmoko, Ali Akbar Septiandri, Arie Pratama Sutiono},
397
+ journal={arXiv preprint arXiv:1909.11879},
398
+ year={2019}
399
+ }
400
+ @article{fernando2019aspect,
401
+ title={Aspect and Opinion Terms Extraction Using Double Embeddings and Attention Mechanism for Indonesian Hotel Reviews},
402
+ author={Jordhy Fernando, Masayu Leylia Khodra, Ali Akbar Septiandri},
403
+ journal={arXiv preprint arXiv:1908.04899},
404
+ year={2019}
405
+ }"""
406
+ ),
407
+ ),
408
+ IndonluConfig(
409
+ name="keps",
410
+ description=textwrap.dedent(
411
+ """\
412
+ This keyphrase extraction dataset (Mahfuzh et al., 2019) consists of text from Twitter discussing
413
+ banking products and services and is written in the Indonesian language. A phrase containing
414
+ important information is considered a keyphrase. Text may contain one or more keyphrases since
415
+ important phrases can be located at different positions. The dataset follows the IOB chunking format,
416
+ which represents the position of the keyphrase."""
417
+ ),
418
+ text_features={"tokens": "tokens"},
419
+ # label classes sorted refer to https://github.com/indobenchmark/indonlu/blob/master/utils/data_utils.py
420
+ label_classes=["O", "B", "I"],
421
+ label_column="seq_label",
422
+ train_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/keps_keyword-extraction-prosa/train_preprocess.txt",
423
+ valid_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/keps_keyword-extraction-prosa/valid_preprocess.txt",
424
+ test_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/keps_keyword-extraction-prosa/test_preprocess_masked_label.txt",
425
+ citation=textwrap.dedent(
426
+ """\
427
+ @inproceedings{mahfuzh2019improving,
428
+ title={Improving Joint Layer RNN based Keyphrase Extraction by Using Syntactical Features},
429
+ author={Miftahul Mahfuzh, Sidik Soleman, and Ayu Purwarianti},
430
+ booktitle={Proceedings of the 2019 International Conference of Advanced Informatics: Concepts, Theory and Applications (ICAICTA)},
431
+ pages={1--6},
432
+ year={2019},
433
+ organization={IEEE}
434
+ }"""
435
+ ),
436
+ ),
437
+ IndonluConfig(
438
+ name="nergrit",
439
+ description=textwrap.dedent(
440
+ """\
441
+ This NER dataset is taken from the Grit-ID repository, and the labels are spans in IOB chunking representation.
442
+ The dataset consists of three kinds of named entity tags, PERSON (name of person), PLACE (name of location), and
443
+ ORGANIZATION (name of organization)."""
444
+ ),
445
+ text_features={"tokens": "tokens"},
446
+ # label classes sorted refer to https://github.com/indobenchmark/indonlu/blob/master/utils/data_utils.py
447
+ label_classes=["I-PERSON", "B-ORGANISATION", "I-ORGANISATION", "B-PLACE", "I-PLACE", "O", "B-PERSON"],
448
+ label_column="ner_tags",
449
+ train_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/nergrit_ner-grit/train_preprocess.txt",
450
+ valid_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/nergrit_ner-grit/valid_preprocess.txt",
451
+ test_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/nergrit_ner-grit/test_preprocess_masked_label.txt",
452
+ citation=textwrap.dedent(
453
+ """\
454
+ @online{nergrit2019,
455
+ title={NERGrit Corpus},
456
+ author={NERGrit Developers},
457
+ year={2019},
458
+ url={https://github.com/grit-id/nergrit-corpus}
459
+ }"""
460
+ ),
461
+ ),
462
+ IndonluConfig(
463
+ name="nerp",
464
+ description=textwrap.dedent(
465
+ """\
466
+ This NER dataset (Hoesen and Purwarianti, 2018) contains texts collected from several Indonesian news websites.
467
+ There are five labels available in this dataset, PER (name of person), LOC (name of location), IND (name of product or brand),
468
+ EVT (name of the event), and FNB (name of food and beverage). The NERP dataset uses the IOB chunking format."""
469
+ ),
470
+ text_features={"tokens": "tokens"},
471
+ # label classes sorted refer to https://github.com/indobenchmark/indonlu/blob/master/utils/data_utils.py
472
+ label_classes=[
473
+ "I-PPL",
474
+ "B-EVT",
475
+ "B-PLC",
476
+ "I-IND",
477
+ "B-IND",
478
+ "B-FNB",
479
+ "I-EVT",
480
+ "B-PPL",
481
+ "I-PLC",
482
+ "O",
483
+ "I-FNB",
484
+ ],
485
+ label_column="ner_tags",
486
+ train_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/nerp_ner-prosa/train_preprocess.txt",
487
+ valid_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/nerp_ner-prosa/valid_preprocess.txt",
488
+ test_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/nerp_ner-prosa/test_preprocess_masked_label.txt",
489
+ citation=textwrap.dedent(
490
+ """\
491
+ @inproceedings{hoesen2018investigating,
492
+ title={Investigating Bi-LSTM and CRF with POS Tag Embedding for Indonesian Named Entity Tagger},
493
+ author={Devin Hoesen and Ayu Purwarianti},
494
+ booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)},
495
+ pages={35--38},
496
+ year={2018},
497
+ organization={IEEE}
498
+ }"""
499
+ ),
500
+ ),
501
+ IndonluConfig(
502
+ name="facqa",
503
+ description=textwrap.dedent(
504
+ """\
505
+ The goal of the FacQA dataset is to find the answer to a question from a provided short passage from
506
+ a news article (Purwarianti et al., 2007). Each row in the FacQA dataset consists of a question,
507
+ a short passage, and a label phrase, which can be found inside the corresponding short passage.
508
+ There are six categories of questions: date, location, name, organization, person, and quantitative."""
509
+ ),
510
+ text_features={"question": "question", "passage": "passage"},
511
+ # label classes sorted refer to https://github.com/indobenchmark/indonlu/blob/master/utils/data_utils.py
512
+ label_classes=["O", "B", "I"],
513
+ label_column="seq_label",
514
+ train_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/facqa_qa-factoid-itb/train_preprocess.csv",
515
+ valid_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/facqa_qa-factoid-itb/valid_preprocess.csv",
516
+ test_url="https://raw.githubusercontent.com/indobenchmark/indonlu/master/dataset/facqa_qa-factoid-itb/test_preprocess_masked_label.csv",
517
+ citation=textwrap.dedent(
518
+ """\
519
+ @inproceedings{purwarianti2007machine,
520
+ title={A Machine Learning Approach for Indonesian Question Answering System},
521
+ author={Ayu Purwarianti, Masatoshi Tsuchiya, and Seiichi Nakagawa},
522
+ booktitle={Proceedings of Artificial Intelligence and Applications },
523
+ pages={573--578},
524
+ year={2007}
525
+ }"""
526
+ ),
527
+ ),
528
+ ]
529
+
530
+ def _info(self):
531
+ sentence_features = ["terma", "keps", "facqa"]
532
+ ner_ = ["nergrit", "nerp"]
533
+ pos_ = ["posp", "bapos"]
534
+
535
+ if self.config.name in (sentence_features + ner_ + pos_):
536
+ features = {
537
+ text_feature: datasets.Sequence(datasets.Value("string"))
538
+ for text_feature in six.iterkeys(self.config.text_features)
539
+ }
540
+ else:
541
+ features = {
542
+ text_feature: datasets.Value("string") for text_feature in six.iterkeys(self.config.text_features)
543
+ }
544
+
545
+ if self.config.label_classes:
546
+ if self.config.name in sentence_features:
547
+ features["seq_label"] = datasets.Sequence(
548
+ datasets.features.ClassLabel(names=self.config.label_classes)
549
+ )
550
+ elif self.config.name in ner_:
551
+ features["ner_tags"] = datasets.Sequence(datasets.features.ClassLabel(names=self.config.label_classes))
552
+ elif self.config.name in pos_:
553
+ features["pos_tags"] = datasets.Sequence(datasets.features.ClassLabel(names=self.config.label_classes))
554
+ elif self.config.name == "casa" or self.config.name == "hoasa":
555
+ for label in self.config.label_column:
556
+ features[label] = datasets.features.ClassLabel(names=self.config.label_classes)
557
+ else:
558
+ features["label"] = datasets.features.ClassLabel(names=self.config.label_classes)
559
+
560
+ return datasets.DatasetInfo(
561
+ description=self.config.description,
562
+ features=datasets.Features(features),
563
+ homepage=_INDONLU_HOMEPAGE,
564
+ citation=self.config.citation + "\n" + _INDONLU_CITATION,
565
+ )
566
+
567
+ def _split_generators(self, dl_manager):
568
+ """Returns SplitGenerators."""
569
+ train_path = dl_manager.download_and_extract(self.config.train_url)
570
+ valid_path = dl_manager.download_and_extract(self.config.valid_url)
571
+ test_path = dl_manager.download_and_extract(self.config.test_url)
572
+ return [
573
+ datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
574
+ datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": valid_path}),
575
+ datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": test_path}),
576
+ ]
577
+
578
+ def _generate_examples(self, filepath):
579
+ """ Yields examples. """
580
+ csv_file = ["emot", "wrete", "facqa", "casa", "hoasa"]
581
+ tsv_file = ["smsa"]
582
+ txt_file = ["terma", "keps"]
583
+ txt_file_pos = ["posp", "bapos"]
584
+ txt_file_ner = ["nergrit", "nerp"]
585
+
586
+ with open(filepath, encoding="utf-8") as f:
587
+
588
+ if self.config.name in csv_file:
589
+ reader = csv.reader(f, delimiter=",", quotechar='"', quoting=csv.QUOTE_ALL)
590
+ next(reader) # skip first row which is header
591
+
592
+ for id_, row in enumerate(reader):
593
+ if self.config.name == "emot":
594
+ label, tweet = row
595
+ yield id_, {"tweet": tweet, "label": label}
596
+ elif self.config.name == "wrete":
597
+ premise, hypothesis, category, label = row
598
+ yield id_, {"premise": premise, "hypothesis": hypothesis, "category": category, "label": label}
599
+ elif self.config.name == "facqa":
600
+ question, passage, seq_label = row
601
+ yield id_, {
602
+ "question": ast.literal_eval(question),
603
+ "passage": ast.literal_eval(passage),
604
+ "seq_label": ast.literal_eval(seq_label),
605
+ }
606
+ elif self.config.name == "casa" or self.config.name == "hoasa":
607
+ sentence, *labels = row
608
+ sentence = {"sentence": sentence}
609
+ label = {l: labels[idx] for idx, l in enumerate(self.config.label_column)}
610
+ yield id_, {**sentence, **label}
611
+ elif self.config.name in tsv_file:
612
+ reader = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
613
+
614
+ for id_, row in enumerate(reader):
615
+ if self.config.name == "smsa":
616
+ text, label = row
617
+ yield id_, {"text": text, "label": label}
618
+ elif self.config.name in (txt_file + txt_file_pos + txt_file_ner):
619
+ id_ = 0
620
+ tokens = []
621
+ seq_label = []
622
+ for line in f:
623
+ if len(line.strip()) > 0:
624
+ token, label = line[:-1].split("\t")
625
+ tokens.append(token)
626
+ seq_label.append(label)
627
+ else:
628
+ if self.config.name in txt_file:
629
+ yield id_, {"tokens": tokens, "seq_label": seq_label}
630
+ elif self.config.name in txt_file_pos:
631
+ yield id_, {"tokens": tokens, "pos_tags": seq_label}
632
+ elif self.config.name in txt_file_ner:
633
+ yield id_, {"tokens": tokens, "ner_tags": seq_label}
634
+ id_ += 1
635
+ tokens = []
636
+ seq_label = []
637
+ # add last example
638
+ if tokens:
639
+ if self.config.name in txt_file:
640
+ yield id_, {"tokens": tokens, "seq_label": seq_label}
641
+ elif self.config.name in txt_file_pos:
642
+ yield id_, {"tokens": tokens, "pos_tags": seq_label}
643
+ elif self.config.name in txt_file_ner:
644
+ yield id_, {"tokens": tokens, "ner_tags": seq_label}
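For reference, the txt-based sequence-labelling configs (`terma`, `keps`, `posp`, `bapos`, `nergrit`, `nerp`) handled in `_generate_examples` above use one `token<TAB>label` pair per line, with a blank line separating sentences. A minimal sketch of that parsing logic on a made-up snippet (the tokens and tags below are illustrative only, not taken from the actual dataset files):

```python
# Sketch of the tab-separated, blank-line-delimited format parsed by
# _generate_examples for the txt-based configs. The snippet is illustrative
# data only; real files come from the raw GitHub URLs in the configs above.
sample = (
    "Budi\tB-PERSON\n"
    "tinggal\tO\n"
    "di\tO\n"
    "Jakarta\tB-PLACE\n"
    "\n"
    "Dia\tO\n"
    "bekerja\tO\n"
    "di\tO\n"
    "Bandung\tB-PLACE\n"
)

examples = []
tokens, tags = [], []
for line in sample.splitlines():
    if line.strip():                      # a "token<TAB>label" line
        token, tag = line.split("\t")
        tokens.append(token)
        tags.append(tag)
    else:                                 # blank line ends the sentence
        examples.append({"tokens": tokens, "ner_tags": tags})
        tokens, tags = [], []
if tokens:                                # last sentence, as in the script
    examples.append({"tokens": tokens, "ner_tags": tags})

print(len(examples))          # 2
print(examples[0]["tokens"])  # ['Budi', 'tinggal', 'di', 'Jakarta']
```

The script then maps these string tags to integer ids through the `ClassLabel` features built from the `label_classes` list declared in each `IndonluConfig`.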