Languages: English
Size Categories: 1K<n<10K
Columns: text (string, lengths 1 to 56)

WikiSpell

Description

This dataset is a custom reimplementation of the WikiSpell dataset introduced in Character-Aware Models Improve Visual Text Rendering (Liu et al., 2023).

As in the original WikiSpell dataset, the training set consists of 5,000 words sampled uniformly from the 50% least common Wiktionary words (taken from this Wiktionary extraction) and 5,000 words sampled in proportion to their frequency from the 50% most common Wiktionary words.

The validation and test sets are each split into 5 subsets, sampled according to word frequency in the corpus:

  • 1% most common words
  • 1 - 10% most common words
  • 10 - 20% most common words
  • 20 - 30% most common words
  • 50% least common words

Unlike the original WikiSpell dataset, we compute word frequencies from the first 100k sentences of OpenWebText (Skylion007/openwebtext) instead of mC4.
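
For illustration, the training-split construction described above could be sketched as follows. This is a minimal sketch, not the actual generation script: the words list, the freq counts (word frequencies computed from the first 100k OpenWebText sentences), and the build_train_split helper are assumptions, and the frequency-weighted half is drawn with replacement for simplicity.

import random

def build_train_split(words, freq, n_per_half=5000, seed=0):
    # Rank words from least to most common according to the OpenWebText counts.
    rng = random.Random(seed)
    ranked = sorted(words, key=lambda w: freq.get(w, 0))
    half = len(ranked) // 2
    rare, common = ranked[:half], ranked[half:]

    # 5,000 words drawn uniformly from the 50% least common words.
    rare_sample = rng.sample(rare, n_per_half)

    # 5,000 words drawn from the 50% most common words, weighted by frequency.
    weights = [freq.get(w, 0) for w in common]
    common_sample = rng.choices(common, weights=weights, k=n_per_half)

    return rare_sample + common_sample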

Usage

This dataset is intended for testing the spelling ability of large language models. To do so, the labels should be computed from the text column as in the following snippet:

sample = ds["train"][0]               # ds is the loaded dataset (see the fuller sketch below)
label = " ".join(sample["text"])      # spelling label: the word with its characters separated by spaces

The labels are not included in the dataset files directly.
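
A fuller usage sketch, assuming the datasets library is installed; the repository id "user/wikispell" and the with_label helper below are hypothetical placeholders, not names defined by this dataset.

from datasets import load_dataset

ds = load_dataset("user/wikispell")   # hypothetical repository id: replace with this dataset's actual id

def with_label(sample):
    # The spelling label is the word with its characters separated by spaces.
    sample["label"] = " ".join(sample["text"])
    return sample

ds = ds.map(with_label)               # adds the label column to every split
print(ds["train"][0])                 # e.g. {'text': '...', 'label': '. . .'}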

Citation

Please cite the original paper introducing WikiSpell if you're using this dataset:

@inproceedings{liu-etal-2023-character,
    title = "Character-Aware Models Improve Visual Text Rendering",
    author = "Liu, Rosanne  and
      Garrette, Dan  and
      Saharia, Chitwan  and
      Chan, William  and
      Roberts, Adam  and
      Narang, Sharan  and
      Blok, Irina  and
      Mical, Rj  and
      Norouzi, Mohammad  and
      Constant, Noah",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-long.900",
    pages = "16270--16297",
}