
Write to parquet file



Usage

<DataFrame>$write_parquet(
  file,
  compression = "zstd",
  compression_level = 3,
  statistics = TRUE,
  row_group_size = NULL,
  data_pagesize_limit = NULL
)


Arguments

file File path to which the result should be written.
compression String. The compression method. One of:
  • "lz4": fast compression/decompression.
  • "uncompressed"
  • "snappy": this guarantees that the parquet file will be compatible with older parquet readers.
  • "gzip"
  • "lzo"
  • "brotli"
  • "zstd": good compression performance.
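For example, to compare two codecs on the same data (a sketch; the exact file sizes depend on the data and the polars version):

```r
library(polars)

dat = pl$DataFrame(mtcars)

# Write the same DataFrame with two different codecs.
path_snappy = tempfile(fileext = ".parquet")
path_zstd = tempfile(fileext = ".parquet")

dat$write_parquet(path_snappy, compression = "snappy")
dat$write_parquet(path_zstd, compression = "zstd")

# Compare the resulting file sizes in bytes.
file.size(path_snappy)
file.size(path_zstd)
```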
compression_level NULL or Integer. The level of compression to use. Only used if compression is one of "gzip", "brotli", or "zstd". Higher compression levels mean smaller files on disk:
  • "gzip": min-level: 0, max-level: 10.
  • "brotli": min-level: 0, max-level: 11.
  • "zstd": min-level: 1, max-level: 22.
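A higher level trades write speed for a smaller file. A minimal sketch with zstd:

```r
library(polars)

dat = pl$DataFrame(mtcars)
path = tempfile(fileext = ".parquet")

# zstd accepts levels 1-22; 10 compresses harder than the default of 3.
dat$write_parquet(path, compression = "zstd", compression_level = 10)
```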
statistics Whether statistics should be written to the Parquet headers. Possible values:
  • TRUE: enable default set of statistics (default)
  • FALSE: disable all statistics
  • "full": calculate and write all available statistics.
  • A named list where all values must be TRUE or FALSE, e.g. list(min = TRUE, max = FALSE). Available statistics are "min", "max", "distinct_count", and "null_count".
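For instance, a named list can enable only the min/max statistics and skip the counts (a sketch of the list form described above):

```r
library(polars)

dat = pl$DataFrame(mtcars)
path = tempfile(fileext = ".parquet")

# Write min/max statistics only; disable the distinct and null counts.
dat$write_parquet(
  path,
  statistics = list(
    min = TRUE,
    max = TRUE,
    distinct_count = FALSE,
    null_count = FALSE
  )
)
```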
row_group_size NULL or Integer. Size of the row groups in number of rows. If NULL (default), the chunks of the DataFrame are used. Writing in smaller chunks may reduce memory pressure and improve writing speeds.
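As a sketch, forcing small row groups on a small table:

```r
library(polars)

dat = pl$DataFrame(mtcars)
path = tempfile(fileext = ".parquet")

# Cap each row group at 10 rows; with mtcars' 32 rows, no group
# in the written file exceeds 10 rows.
dat$write_parquet(path, row_group_size = 10)
```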
data_pagesize_limit NULL or Integer. If NULL (default), the limit will be ~1MB.


Value

Invisibly returns the input DataFrame.
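Because the input DataFrame is returned invisibly, the call can sit in the middle of a workflow without printing anything, and the result can still be captured. A sketch (assuming the `$shape` field of the r-polars DataFrame):

```r
library(polars)

dat = pl$DataFrame(mtcars)
path = tempfile(fileext = ".parquet")

# Nothing is printed, but the same DataFrame comes back.
out = dat$write_parquet(path)
identical(out$shape, dat$shape)
```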



Examples

library(polars)

# write table 'mtcars' from memory to parquet
dat = pl$DataFrame(mtcars)

destination = tempfile(fileext = ".parquet")
dat$write_parquet(destination)