Gerard-1705 and Gabrielaz committed bdf5373 (1 parent: 8cffeb0)

Create README.md (#7)

- Create README.md (1c96682fa8bc4cee40e020dfc81ae91a97e05119)

Co-authored-by: Gabriela Zuñiga <Gabrielaz@users.noreply.huggingface.co>

Files changed (1): README.md (+238, −0)
---
license: cc-by-4.0
task_categories:
- text-classification
language:
- es
tags:
- climate
pretty_name: ClimID
size_categories:
- 1K<n<10K
---
# Dataset Card for BERTIN-ClimID: BERTIN-Base Climate-related Text Identification

README Spanish version: [README_ES](https://huggingface.co/datasets/somosnlp/spa_climate_detection/blob/main/README_ES.md)

The dataset for BERTIN-ClimID was built by merging several open-source data sources.

<!--

Corpus name:

There is usually a short "pretty name" for URLs, tables, etc., and a longer, more descriptive one. You can use acronyms to create the pretty name.

Language:

The Dataset Card can be in Spanish or English. We recommend English so that the international community can use your dataset. Given that we are a Spanish-speaking community and do not want language to be a barrier, the most inclusive option is to write it in one language and translate it (automatically?) into the other. The repo would then contain a README.md (Dataset Card in English) linking to a README_ES.md (Dataset Card in Spanish), or vice versa, README.md and README_EN.md. If you need help with the translation, we can assist you.

What to include in this section:

This section is like the abstract. Write a summary of the corpus and the motivation for the project (incl. the related SDGs). If the project has a logo, include it here.

If you want to include a Spanish version of the Dataset Card, link it here at the beginning (e.g. "A Spanish version of this Dataset Card can be found under [`README_es.md`](URL)"). Analogously for English.

-->

## Dataset Details

### Dataset Description

<!-- A one-sentence summary of the dataset. -->
- **Curated by:** [Gerardo Huerta](https://huggingface.co/Gerard-1705) and [Gabriela Zuñiga](https://huggingface.co/Gabrielaz)
- **Funded by:** SomosNLP, HuggingFace
- **Language(s):** es-ES, es-PE
- **License:** cc-by-nc-sa-4.0

### Dataset Sources

- **Repository:** [somosnlp/spa_climate_detection](https://huggingface.co/datasets/somosnlp/spa_climate_detection) <!-- Link to the `main` of the repo with the scripts, i.e. either this dataset repo on Hugging Face or GitHub. -->
- **Paper:** [WIP] <!-- If you plan to submit it to NAACL, write "WIP", "Coming soon!" or similar. If you do not intend to submit it to any conference or write a preprint, remove this. -->
- **Video presentation:** [Proyecto BERTIN-ClimID](https://www.youtube.com/watch?v=sfXLUP9Ei-o) <!-- Link to your YouTube presentation video (all are uploaded here: https://www.youtube.com/playlist?list=PLTA-KAy8nxaASMwEUWkkTfMaDxWBxn-8J) -->

<!-- ### Dataset Versions & Formats [optional] -->

<!-- If you have several versions of your dataset, you can combine them all in the same repo and simply link the corresponding commits here. See the example at https://huggingface.co/bertin-project/bertin-roberta-base-spanish -->

<!-- If the dataset comes in several formats (e.g. unannotated, question/answer, gemma), you can list them here. -->

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use
- News classification: with this model it is possible to classify news headlines related to climate change topics.
- Paper classification: identifying scientific texts that present solutions to and/or effects of climate change. For this use case, each paper's abstract can be used for identification.
- Social media post classification: classifying short social media posts as related or unrelated to climate topics.
<!-- This section describes suitable use cases for the dataset. -->


### Out-of-Scope Use
- Building information repositories on climate issues.
- Serving as a basis for new classification systems for climate solutions, in order to disseminate new efforts against climate change across different sectors.
- Creating new datasets that address the topic.
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

## Dataset Structure
- **question:** the text.
- **answer:** binary label; 1 if the text is related to climate change or sustainability, 0 if it is not.
- **domain:** the topic the text relates to. Three values: "climate_change_reports" (paragraphs about climate change extracted from corporate annual reports), "miscellaneous_press" (paragraphs on various topics extracted from the press), and "climate_change" (all other paragraphs about the topic with no special source).
- **country of origin:** where the data comes from geographically. Three categories: "global", "Spain", "USA". "Global" covers data taken from repositories that do not indicate a specific origin, with sources from any country.
- **language:** geographic variety of Spanish used. Two values, "es_pe" and "es_esp": much of the data had to be translated from English into Spanish, and annotations follow the regional variety of the team member who did the translation.
- **registration:** functional variety of the language. Three values depending on the data's origin: "cult", "medium", "colloquial".
- **task:** the purpose the input data is intended for.
- **period:** the era of the language used. This dataset uses present-day language.
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

<!--

List and explain each column of the corpus. For every column of type "category", indicate the percentage of examples. You can find the proposed corpus structure in [estructura_corpus.md](/plantillas_docs_proyectos/estructura_corpus.md).

Example:

The corpus has a total of X examples and contains the following columns:
- `pregunta`
- `respuesta`
- `idioma` (geographic variety): ISO code of the language. Distribution: 33% `es_AR`, 33% `es_UY`, 33% `es_PY`
- `registro` (functional variety): `coloquial`, `medio`, or `culto`. Distribution: 100% `coloquial`.
- `periodo` (historical variety): `actual`, `moderno` (18th-19th c.), `clásico` (16th-17th c.), or `medieval`. Distribution: 100% `actual`.
- `dominio`: domain of the instruction. Distribution: 10% `sociales_historia`, ...
- `tarea`: task of the instruction. Distribution: 100% `resumen`.
- `país_origen`: ISO code of the data's country of origin. Distribution:
- `país_referencia`: ISO code of the country the question refers to. Distribution: 55% blank, 5% ..., ...

-->

[More Information Needed]
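As an illustration of the schema above, a record can be represented as a plain dictionary. The field names mirror the descriptions in this card; the example text and exact key spellings are invented for illustration:

```python
# A hypothetical record following the schema described above.
# The text and the exact key names are illustrative assumptions.
record = {
    "question": "El aumento de las emisiones de CO2 acelera el calentamiento global.",
    "answer": 1,  # 1 = related to climate change/sustainability, 0 = unrelated
    "domain": "climate_change",
    "country_of_origin": "global",
    "language": "es_esp",
    "registration": "medium",
    "task": "text-classification",
    "period": "actual",
}

# Minimal sanity checks a loader might run over such records
assert record["answer"] in (0, 1)
assert record["domain"] in {"climate_change_reports", "miscellaneous_press", "climate_change"}
assert record["language"] in {"es_pe", "es_esp"}
```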

## Dataset Creation

### Curation Rationale
The dataset was created to build a Spanish-language repository of information and resources on topics such as climate change, sustainability, global warming, and energy, because we did not find an existing dataset of this kind. Climate change and global warming are major global problems, so it is important to fight them everywhere and in every language, and to make solutions and information accessible to everyone.

<!-- Motivation for the creation of this dataset. -->

### Source Data
We used several data sources to build a varied dataset that works with different types of texts: articles, news, social media posts, and others. We included:

- Spanish translation of the [ClimateBERT climate_detection](https://huggingface.co/datasets/climatebert/climate_detection) dataset
- News in Spanish on topics not related to climate change: [Spanish news headers](https://www.kaggle.com/datasets/kevinmorgado/spanish-news-classification)
- Translation of opinions related to climate change: [Opinions](https://data.world/crowdflower/sentiment-of-climate-change)
- Translation of news tweets not related to climate change: [Posts](https://www.kaggle.com/datasets/muhammadmemoon/los-angeles-twitter-news-dataset)

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

<!-- Whenever possible, include links to the source data. -->

#### Data Collection and Processing

- Spanish translation of the [ClimateBERT climate_detection](https://huggingface.co/datasets/climatebert/climate_detection) dataset
- News in Spanish on topics not related to climate change: [Spanish news headers](https://www.kaggle.com/datasets/kevinmorgado/spanish-news-classification)
  From this dataset we kept the news column for the topics Macroeconomics, Innovation, Regulations, Alliances, and Reputation, all labeled 0. The dataset also contained a Sustainability topic, which was removed (we only required unrelated texts).
- Translation of opinions related to climate change: [Opinions](https://data.world/crowdflower/sentiment-of-climate-change)
  All opinions in this dataset are related to climate change, so they were labeled 1. The data was cleaned by removing hashtags, usernames, and emojis in order to keep only the textual content of the tweets.
- Translation of news tweets not related to climate change: [Posts](https://www.kaggle.com/datasets/muhammadmemoon/los-angeles-twitter-news-dataset)
  The news in this dataset is categorized and short (like opinions), and none of it is related to climate change, so it was labeled 0. The data was cleaned by removing hashtags, usernames, and emojis in order to keep only the textual content of the tweets. This dataset was chosen to balance the number of related texts and to include short unrelated texts in training.
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

<!-- Link here the scripts and notebooks used to generate the corpus. -->
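The cleaning step described above (removing hashtags, usernames, and emojis from the tweets) can be sketched with a few regular expressions. The exact patterns used by the team are not published, so this is only an illustrative approximation:

```python
import re

def clean_tweet(text: str) -> str:
    """Illustrative tweet cleaning: strip @mentions, #hashtags, and emojis."""
    text = re.sub(r"@\w+", "", text)  # remove @username mentions
    text = re.sub(r"#\w+", "", text)  # remove #hashtags
    # Drop common emoji/symbol ranges while keeping Spanish accented letters.
    text = re.sub(r"[\U0001F000-\U0001FAFF\u2600-\u27BF]", "", text)
    return re.sub(r"\s+", " ", text).strip()  # collapse leftover whitespace

print(clean_tweet("¡El clima está cambiando! 🌍 @usuario #CambioClimatico"))
# → ¡El clima está cambiando!
```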

#### Who are the source data producers?
- ClimateBERT dataset: large companies listed in the original dataset's paper.
- Spanish news: web scraping of bank news sites.
- Opinions on climate change: tweet extraction.
- Opinions not related to climate change: around two months of Los Angeles news tweets from Twitter.

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

### Annotations

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process
All records already carried the corresponding annotation (related or not related to climate and global warming); we only mapped the text values to binary values (1 / 0).
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

<!-- Link here the notebook used to create the Argilla annotation space and the annotation guidelines. -->
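The mapping from text labels to binary values described above is a one-line transformation. A sketch, where the source label strings are assumptions for illustration (the original sources use their own label names):

```python
# Hypothetical mapping from source text labels to the binary "answer" field.
# "related" / "not_related" are assumed label names, not the originals.
LABEL_MAP = {"related": 1, "not_related": 0}

def to_binary(label: str) -> int:
    """Convert a source text label to the dataset's 1/0 answer value."""
    return LABEL_MAP[label]

rows = [
    {"question": "texto A", "label": "related"},
    {"question": "texto B", "label": "not_related"},
]
binary = [to_binary(r["label"]) for r in rows]  # [1, 0]
```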

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information
In this case an anonymization process was not necessary.
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->


## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

<!-- Here you can mention the possible biases inherited from the origin of the data and from the people who annotated it, discuss the balance of the represented categories, and the efforts you made to mitigate biases and risks. -->
No specific studies on biases and limitations have been carried out yet; however, we make the following observations based on previous experience and model testing:
- The dataset inherits the biases and limitations of the base model it was trained with.
- Direct biases, such as the predominance of formal, high-register language in the dataset (texts extracted from news and corporate legal documentation), can make it harder to identify texts in lower registers (e.g. colloquial language). To mitigate these biases, diverse opinions on climate change topics extracted from sources such as social networks were included in the dataset, and the labels were additionally rebalanced (see the tables below).
- The dataset inherits other limitations: the model loses performance on short texts, because most of the texts in the dataset are long, between 200 and 500 words. Again, we tried to mitigate this limitation by including short texts.

- train:

| Count | Label | % |
|----------|----------|----------|
| 1600 | 1 | 55% |
| 1300 | 0 | 45% |

- test:

| Count | Label | % |
|----------|----------|----------|
| 480 | 1 | 62% |
| 300 | 0 | 38% |


### Recommendations
Our recommendation is to keep adding Spanish text samples, both long and short.
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations.

Example:

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. -->

## License
cc-by-nc-sa-4.0, inherited from the data used in the dataset.
<!-- State under which license the dataset is released and, if it is not Apache 2.0, explain why the more restrictive license is used (i.e. inheritance from the data used). -->

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**
```
@misc{BERTIN-ClimID,
  author = {Gerardo Huerta and Gabriela Zuñiga},
  title = {Dataset for BERTIN-ClimID: BERTIN-Base Climate-related text Identification},
  month = apr,
  year = 2024,
  url = {https://huggingface.co/datasets/somosnlp/spa_climate_detection}
}
```

<!--## Glossary [optional]
If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
## More Information
This project was developed during the [Hackathon #Somos600M](https://somosnlp.org/hackathon) organized by SomosNLP. We thank all the event organizers and sponsors for their support.

**Team:**

- [Gerardo Huerta](https://huggingface.co/Gerard-1705)
- [Gabriela Zuñiga](https://huggingface.co/Gabrielaz)

## Contact

- gerardohuerta1705@gmail.com
- gabriela.zuniga@unsaac.edu.pe