Write to parquet file

Description

Write the DataFrame to a Parquet file.

Usage

<DataFrame>$write_parquet(
  file,
  ...,
  compression = "zstd",
  compression_level = 3,
  statistics = FALSE,
  row_group_size = NULL,
  data_pagesize_limit = NULL
)

Arguments

file File path to which the result should be written.
... Ignored.
compression String. The compression method. One of:
  • "lz4": fast compression/decompression.
  • "uncompressed"
  • "snappy": this guarantees that the parquet file will be compatible with older parquet readers.
  • "gzip"
  • "lzo"
  • "brotli"
  • "zstd": good compression performance.
compression_level NULL or Integer. The level of compression to use. Only used if compression is one of "gzip", "brotli", or "zstd". Higher compression means smaller files on disk (see the sketch after this argument list):
  • "gzip": min-level: 0, max-level: 10.
  • "brotli": min-level: 0, max-level: 11.
  • "zstd": min-level: 1, max-level: 22.
statistics Logical. Whether to compute and write column statistics. This requires extra compute.
row_group_size NULL or Integer. Size of the row groups in number of rows. If NULL (default), the chunks of the DataFrame are used. Writing in smaller chunks may reduce memory pressure and improve writing speeds.
data_pagesize_limit NULL or Integer. Size limit of individual data pages. If NULL (default), the limit will be ~1MB.
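
As an illustration, the sketch below combines several of these options in one call. The specific values (gzip at level 9, statistics enabled, 16-row row groups) are arbitrary choices for demonstration, not recommendations.

library(polars)

dat = pl$DataFrame(mtcars)
out = tempfile(fileext = ".parquet")

# gzip at a high level trades write speed for smaller files
dat$write_parquet(
  out,
  compression = "gzip",
  compression_level = 9,
  statistics = TRUE,   # also compute and store column statistics
  row_group_size = 16  # small row groups; mtcars has only 32 rows
)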

Value

Invisibly returns the input DataFrame.
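
Because the DataFrame is returned invisibly, further methods can be chained after the write; a minimal sketch:

library(polars)

df = pl$DataFrame(mtcars)
path = tempfile(fileext = ".parquet")

# the invisible return value lets the pipeline continue after writing
df$write_parquet(path)$head(3)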

Examples

library(polars)

# write the 'mtcars' data from memory to a parquet file
dat = pl$DataFrame(mtcars)

destination = tempfile(fileext = ".parquet")
dat$write_parquet(destination)
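
# To check the round trip, the file can be read back; this assumes
# pl$read_parquet() is available in your polars installation.
pl$read_parquet(destination)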