polars.DataFrame.write_parquet
- DataFrame.write_parquet(file: Union[str, pathlib.Path, _io.BytesIO], compression: Optional[Union[Literal['uncompressed', 'snappy', 'gzip', 'lzo', 'brotli', 'lz4', 'zstd'], str]] = 'lz4', statistics: bool = False, use_pyarrow: bool = False, **kwargs: Any) -> None
Write the DataFrame to disk in Parquet format.
- Parameters
- file
File path or writable object to which the result will be written.
- compression
- Compression method. Choose one of:
“uncompressed” (not supported by pyarrow)
“snappy”
“gzip”
“lzo”
“brotli”
“lz4”
“zstd”
- statistics
Write statistics to the Parquet headers. This requires extra compute.
- use_pyarrow
Use the C++ Parquet implementation (via pyarrow) instead of the Rust Parquet implementation. At the moment the C++ implementation supports more features.
- **kwargs
Additional keyword arguments passed to pyarrow.parquet.write_table.