tasksource: 600+ dataset harmonization preprocessings with structured annotations for frictionless extreme multi-task learning and evaluation

Hugging Face Datasets is a great library, but it lacks standardization: datasets require preprocessing work before they can be used interchangeably. tasksource automates this preprocessing and makes reproducible multi-task learning easier to scale.

Each dataset is standardized to a MultipleChoice, Classification, or TokenClassification dataset with identical fields. We do not support generation tasks, as they are already addressed by promptsource. All implemented preprocessings are in tasks.py or tasks.md. A preprocessing is a function that takes a dataset and returns the standardized dataset. Preprocessing code is concise and human-readable.
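
To make the idea concrete, here is a minimal, hypothetical sketch (the function and the standardized field names are illustrative, not the exact tasksource schema): a preprocessing takes a raw Hugging Face dataset and renames its columns into a fixed layout.

from datasets import load_dataset

def preprocess_snli(dataset):
    # Illustrative standardized Classification layout (sentence1, sentence2, labels);
    # the real tasksource field names may differ.
    dataset = dataset.rename_column("premise", "sentence1")
    dataset = dataset.rename_column("hypothesis", "sentence2")
    dataset = dataset.rename_column("label", "labels")
    return dataset

snli = preprocess_snli(load_dataset("snli"))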

GitHub: https://github.com/sileod/tasksource

Installation and usage:

pip install tasksource

from tasksource import list_tasks, load_task
df = list_tasks()

for task_id in df[df.task_type == "MultipleChoice"].id:
    dataset = load_task(task_id)
    # all yielded datasets can be used interchangeably
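
Since every task of a given type exposes the same fields, the loaded datasets really are interchangeable. A minimal sketch, assuming load_task returns a DatasetDict with a "train" split (the split name is an assumption here):

from tasksource import list_tasks, load_task

df = list_tasks()
for task_id in df[df.task_type == "Classification"].id.head(3):  # a few tasks, for illustration
    dataset = load_task(task_id)
    print(task_id, dataset["train"].column_names)  # same column layout across tasks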

See the 600+ supported tasks in tasks.md (200+ MultipleChoice tasks, 200+ Classification tasks), and feel free to request a new task. Datasets are downloaded to $HF_DATASETS_CACHE (like any Hugging Face dataset), so make sure you have more than 100 GB of free space there.

Pretrained model:

A text encoder pretrained on tasksource reached state-of-the-art results: 🤗/deberta-v3-base-tasksource-nli
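
As an illustration, such an NLI encoder can be plugged into the transformers zero-shot classification pipeline. This is a minimal sketch; the Hub repository id used below is an assumption inferred from the 🤗 shorthand above and should be checked against the model card.

from transformers import pipeline

# Repository id assumed to be "sileod/deberta-v3-base-tasksource-nli"; verify on the Hub.
classifier = pipeline("zero-shot-classification",
                      model="sileod/deberta-v3-base-tasksource-nli")
print(classifier("tasksource standardizes 600+ datasets for multi-task learning.",
                 candidate_labels=["dataset curation", "image generation"]))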

Contact and citation:

I can help you integrate tasksource into your experiments: damien.sileo@inria.fr

More details in this article:

@article{sileo2023tasksource,
  title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
  author={Sileo, Damien},
  url={https://arxiv.org/abs/2301.05948},
  journal={arXiv preprint arXiv:2301.05948},
  year={2023}
}
