---
license: unknown
task_categories:
  - text-classification
language:
  - nl
pretty_name: Dutch CoLA
tags:
  - croissant
---

Dutch CoLA is a corpus of linguistic acceptability for Dutch: a dataset of Dutch sentences, each marked as either acceptable (class 1) or unacceptable (class 0). The sentences are collected from existing descriptions of Dutch grammar (see sources below), with expert-annotated acceptability labels.

Dutch CoLA is part of a group project by students of the BA Information Science programme at the University of Groningen. People involved (in alphabetical order):

  • Abdi, Silvana
  • Brouwer, Hylke
  • Elzinga, Martine
  • Gunput, Shenza
  • Huisman, Sem
  • Krooneman, Collin
  • Poot, David
  • Top, Jelmer
  • Weideman, Cain
  • Bylinina, Lisa (supervisor)

The dataset format roughly follows that of English CoLA and contains the following fields:

  1. Source of the example (encoded as defined below)
  2. Original ID: example number in the original source (encoded as defined below)
  3. Acceptability: 0 (unacceptable) or 1 (acceptable)
  4. Original annotation: acceptability label of the sentence in the original source (can be empty, ‘*’, ‘??’, ‘?’ etc.)
  5. Sentence: the actual sentence. It may appear exactly as in the source, or with some linguistic notation removed and/or some material added to complete it into a full sentence.
  6. Material added: 0 (if the original example didn’t have to be completed to be a full sentence) or 1 (if some material was added compared to the example in the source to make it a full sentence)
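
For concreteness, here is a sketch of what a single record might look like, written as a Python dict. The field names follow the description above and the original ID shown here is a made-up illustration; check the CSV header for the exact column names:

    # One hypothetical record with the six fields described above.
    row = {
        "Source": "SoD-Zw",               # source code, see the legend below
        "Original ID": "4.12",            # hypothetical example number
        "Acceptability": 1,               # 0 = unacceptable, 1 = acceptable
        "Original annotation": "",        # empty label means acceptable
        "Sentence": "Tasman heeft Nieuw Zeeland ontdekt.",
        "Material added": 0,              # 1 if material was added to complete it
    }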

The dataset is split into 4 subsets:

  • Train (train.csv): 19893 rows (unbalanced)
  • Validation (val.csv): 2400 rows (balanced)
  • Test (test.csv): 2400 rows (balanced)
  • Intermediate (intermediate.csv): 1199 rows; examples whose original acceptability labels are intermediate (‘?’ and ‘(?)’). The ‘Acceptability’ field is 0 for all of them.
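
The splits can be loaded with standard tools. A minimal sketch with pandas, assuming the four CSV files sit in the current directory and the columns are named as described above:

    import pandas as pd

    # Load all four splits; file names are as listed above.
    splits = {name: pd.read_csv(f"{name}.csv")
              for name in ("train", "val", "test", "intermediate")}

    # Row counts should match the numbers above; the label distribution
    # shows which splits are balanced.
    for name, df in splits.items():
        print(name, len(df), df["Acceptability"].value_counts().to_dict())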

Legend for source encoding:

| Source code | Source |
| --- | --- |
| SoD-Zw | Zwart, J. W. (2011). *The Syntax of Dutch*. Cambridge University Press. |
| SoD-Noun1 | Keizer, E., & Broekhuis, H. (2012). *Syntax of Dutch: Nouns and Noun Phrases*, Volume 1. Amsterdam University Press. |
| SoD-Noun2 | den Dikken, M., & Broekhuis, H. (2012). *Syntax of Dutch: Nouns and Noun Phrases*, Volume 2. Amsterdam University Press. |
| SoD-Adj | Broekhuis, H. (2013). *Syntax of Dutch: Adjectives and Adjective Phrases*. Amsterdam University Press. |
| SoD-Adp | Broekhuis, H. (2013). *Syntax of Dutch: Adpositions and Adpositional Phrases*. Amsterdam University Press. |
| SoD-Verb1 | Vos, R., Broekhuis, H., & Corver, N. (2015). *Syntax of Dutch: Verbs and Verb Phrases*, Volume 1. Amsterdam University Press. |
| SoD-Verb2 | Broekhuis, H., & Corver, N. (2015). *Syntax of Dutch: Verbs and Verb Phrases*, Volume 2. Amsterdam University Press. |
| SoD-Verb3 | Broekhuis, H., & Corver, N. (2016). *Syntax of Dutch: Verbs and Verb Phrases*, Volume 3. Amsterdam University Press. |
| SoD-Coord | Broekhuis, H., & Corver, N. (2019). *Syntax of Dutch: Coordination and Ellipsis*. Amsterdam University Press. |

General guidelines that were followed:

  • The corpus contains sentences in Dutch, labelled 0 (“not acceptable”) or 1 (“acceptable”). These labels correspond to the original judgments in the sources (a schematic version of the mapping is sketched below):

    • 0: The original acceptability label was ‘*’, ‘?’ or ‘??’
      • We mark original labels ‘?’ and ‘(?)’ as 0, but they are later split off into a separate file;
    • 1: The original acceptability label was empty.
  • We don’t collect examples that are marked with #, % or $.
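
As a schematic summary, the mapping from an original label to a class looks roughly as follows (purely illustrative: the actual annotation was done by hand, and the function name is ours):

    def map_annotation(original):
        """Illustrative mapping from an original source label to
        (class, goes_to_intermediate_file), or None if not collected."""
        if original in ("#", "%", "$"):
            return None            # not collected at all
        if original == "":
            return (1, False)      # unmarked sentences are acceptable
        if original in ("?", "(?)"):
            return (0, True)       # class 0, but split off to intermediate.csv
        if original in ("*", "??"):
            return (0, False)      # clearly unacceptable
        return None                # anything else is not collected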

  • We ignore sentences that are marked as dialectal, colloquial or otherwise not standard Dutch. We don’t record them at all. Alas!

  • We aim to collect full sentences. If an example in the source is not a full sentence, but a noun phrase or some other fragment (three dogs, or something like …that we called him), we complete it into a full sentence in the most neutral way possible, and mark this fact in a separate column.

  • We keep only plain text, written in the simple conventional way. This means we remove boldface, italics, underlining, etc.

  • If the example contains a translation into English, morpheme-by-morpheme glosses, etc., we don’t include any of this – just the actual example sentence in Dutch. Sometimes the example has morphemes separated from each other with a dash; we remove these dashes too. Example:

      Tasman  heeft     Nieuw Zeeland  ontdek-t
      Tasman  have:3SG  New Zealand    GE:discover-D
      ‘Tasman discovered New Zealand.’

We record this sentence as ‘Tasman heeft Nieuw Zeeland ontdekt.’

  • We remove constituency brackets and other linguistic annotation: we just keep plain Dutch!

  • If the example in the source uses shortcuts to write more than one Dutch sentence in a compact way, with parentheses, slashes, etc., we expand this notation and end up with more than one sentence (see the sketch after these examples). Example:

      Jan laat [Marie (*te) vertrekken].
    

We record this as two sentences:

    ‘Jan laat Marie vertrekken’ – with acceptability label 1

    ‘Jan laat Marie te vertrekken’ – with acceptability label 0

Another example:

    {Dat / *dit} wist ik niet.

This is recorded as two sentences:

    ‘Dat wist ik niet.’ – with acceptability label 1

    ‘Dit wist ik niet.’ – with acceptability label 0

Yet another example:

    Jan stuurt <de koningin> een verzoekschrift <aan de koningin>.

This is recorded as two sentences:

    ‘Jan stuurt de koningin een verzoekschrift’ – with acceptability label 1

    ‘Jan stuurt een verzoekschrift aan de koningin’ – with acceptability label 1
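
Schematically, the expansion of the ‘(*X)’ shortcut can be pictured as follows (an illustrative sketch only: the actual expansion was done manually, and the function name is ours):

    import re

    def expand_optional(sentence):
        """Expand '(*X)': without X the sentence is acceptable (1),
        with X it is unacceptable (0). Illustrative sketch only."""
        m = re.search(r"\(\*([^)]+)\)\s*", sentence)
        if m is None:
            return [(sentence, None)]
        without = (sentence[:m.start()] + sentence[m.end():]).strip()
        with_x = (sentence[:m.start()] + m.group(1) + " " + sentence[m.end():]).strip()
        return [(without, 1), (with_x, 0)]

    # 'Jan laat Marie (*te) vertrekken.' expands to:
    #   ('Jan laat Marie vertrekken.', 1)
    #   ('Jan laat Marie te vertrekken.', 0)
    print(expand_optional("Jan laat Marie (*te) vertrekken."))
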
  • We ignore empty words that are added to an example to make its hidden grammatical structure explicit. In the following example, PRO is a sort of unpronounced, silent pronoun:

      Jan probeert [PRO morgen te komen].
    

We record this sentence as ‘Jan probeert morgen te komen’, without ‘PRO’ (or square brackets, for that matter!).
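
Taken together, the plain-text cleanup steps above (removing brackets, silent elements like PRO, and morpheme dashes) amount to something like the following sketch. It is illustrative only: the real cleanup was done by hand, and a blunt dash removal like this would of course also hit legitimate hyphens:

    import re

    SILENT = {"PRO"}   # other silent elements would be added here

    def clean(example):
        """Strip linguistic notation, keeping plain Dutch. Sketch only."""
        s = example.replace("[", "").replace("]", "")          # brackets
        s = " ".join(w for w in s.split() if w not in SILENT)  # silent words
        s = s.replace("-", "")                                 # morpheme dashes
        return re.sub(r"\s+", " ", s).strip()

    print(clean("Jan probeert [PRO morgen te komen]."))
    # -> 'Jan probeert morgen te komen.'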

  • If a sentence is indicated as acceptable under some of its potential readings, we record it as acceptable.

      a. *Slaap ze! (sleep them)
      b. Slaap ze! (sleep well)
    

This sentence is recorded with acceptability label 1.

  • Example numbering: We keep track of the number of the example in the original source. SoD-Zw has example numbers that already include the chapter number; we keep them the way they are. Other sources (SoD-Coord, for instance) restart example numbers from 1 in each chapter. In this case, we prefix the chapter number to the original ID: example 2 from Chapter 2 then becomes 2.2.

The sentences in this dataset are extracted from the published works listed above, and copyright (where applicable) remains with the original authors or publishers. We expect that research use is legal, but make no guarantee of this.