Here's a download script parallelized using Spark

#22 by srowen

This UDF does the work and returns wget's exit code, so you can check whether the download failed:

import subprocess
from pyspark.sql.functions import udf

@udf('int')
def download_crawl_url(url):
    out_dir = "/Volumes/datasets/redpajama_v2/dataset"
    # Mirror the URL's path under out_dir, dropping the hostname and the first
    # two path components. --timestamping with --no-if-modified-since makes
    # re-runs skip files that were already downloaded.
    wget_cmd = ["wget", "--no-host-directories", "--force-directories", "--cut-dirs=2",
                "--timestamping", "--no-if-modified-since", "--no-show-progress", url]
    result = subprocess.run(wget_cmd, cwd=out_dir, check=False, capture_output=True)
    if result.returncode != 0:
        # Surface wget's output in the executor logs for debugging
        print(result.stdout.decode('utf-8'))
        print(result.stderr.decode('utf-8'))
    return result.returncode
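Before launching the full job, it can be worth smoke-testing the UDF on a couple of URLs. A minimal sketch (the sample URL below is just a stand-in; substitute real lines from your URL file):

# Quick sanity check on a tiny DataFrame before the full run
# (the URL here is a placeholder; use real entries from document-urls.txt)
sample_urls = ["https://data.together.xyz/redpajama-data-v2/v1.0.0/documents/2023-06/0000/en_head.json.gz"]
sample_df = spark.createDataFrame([(u,) for u in sample_urls], ["url"])
sample_df.select("url", download_crawl_url("url").alias("status")).show(truncate=False)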

This runs the UDF, breaking the work into many small tasks so that any one failure or straggler costs little:

# Filter as desired on 'crawl' or 'lang_bucket' to download just a subset, after parsing it out of the paths. Example:
# from pyspark.sql.functions import regexp_extract, col
#   ...
#    withColumn("lang_bucket", regexp_extract("url", "[a-z]{2}_(head|middle|tail)", 0)).\
#    filter(col("lang_bucket").contains("_middle")).\

# Change this to whatever file of URLs you want - documents, quality_signals
spark.read.text("/.../document-urls.txt").toDF("url").\
    repartition(200 * spark.sparkContext.defaultParallelism).\
    select("url", download_crawl_url("url").alias("status")).\
    write.mode("overwrite").parquet("/.../crawl_result_temp")
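Once the job finishes, a quick count tells you how many downloads failed before you bother retrying (same output path as above):

# Count non-zero wget exit codes in the results
results = spark.read.parquet("/.../crawl_result_temp")
failed = results.filter("status <> 0").count()
print(f"{failed} of {results.count()} downloads failed")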

Once it's done, you can retry anything that failed. This matters: across millions of files, a few downloads will almost certainly fail.

spark.read.parquet("/.../crawl_result_temp").\
    filter("status <> 0").\
    select("url", download_crawl_url("url").alias("status")).\
    write.mode("overwrite").parquet("/.../crawl_result_temp_2")

And repeat as needed.
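If you'd rather not kick off each retry pass by hand, a small loop can repeat them until nothing is left to retry; a sketch, assuming the same paths as above (max_retries is an arbitrary cap):

# Keep retrying failures, writing each pass's results to a new path
in_path, max_retries = "/.../crawl_result_temp", 5
for attempt in range(max_retries):
    failed = spark.read.parquet(in_path).filter("status <> 0")
    if failed.count() == 0:
        break
    out_path = f"/.../crawl_result_temp_{attempt + 2}"
    failed.select("url", download_crawl_url("url").alias("status")).\
        write.mode("overwrite").parquet(out_path)
    in_path = out_path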

(Watch out for a few files that don't work: https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2/discussions/20 )
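If a few URLs still fail after several passes, you can pull them out for manual inspection (assuming the last results path):

# List the stubbornly failing URLs for manual follow-up
spark.read.parquet("/.../crawl_result_temp_2").\
    filter("status <> 0").\
    select("url").show(100, truncate=False)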

Together org:

That's super useful, thanks @srowen! Feel free to open a PR for this in the RedPajama GitHub repo :)
