Lazily read from a local or cloud-hosted parquet file (or files)
Description
This allows the query optimizer to push down predicates and projections to the scan level, thereby potentially reducing memory overhead.
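For instance, filtering and selecting columns before calling $collect() lets those operations be pushed into the scan itself; a minimal sketch (the file path and column names below are placeholders):

lf <- pl$scan_parquet("path/to/file.parquet") # placeholder path
lf$filter(pl$col("cyl") == 6)$select("mpg", "hp")$collect()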
Usage
pl$scan_parquet(
source,
...,
n_rows = NULL,
row_index_name = NULL,
row_index_offset = 0L,
parallel = c("auto", "columns", "row_groups", "prefiltered", "none"),
use_statistics = TRUE,
hive_partitioning = NULL,
glob = TRUE,
schema = NULL,
hive_schema = NULL,
try_parse_hive_dates = TRUE,
rechunk = FALSE,
low_memory = FALSE,
cache = TRUE,
storage_options = NULL,
retries = 2,
include_file_paths = NULL,
allow_missing_columns = FALSE
)
Arguments
source
Path(s) to a file or directory. When needing to authenticate for scanning cloud locations, see the storage_options parameter.
...
These dots are for future extensions and must be empty.
n_rows
Stop reading from the source after reading n_rows.
row_index_name
If not NULL, this will insert a row index column with the given name.
row_index_offset
Offset to start the row index column (only used if the name is set by row_index_name).
parallel
This determines the direction and strategy of parallelism.
use_statistics
Use statistics in the Parquet file to determine if pages can be skipped from reading.
hive_partitioning
Infer statistics and schema from Hive partitioned sources and use them to prune reads. If NULL (default), it is automatically enabled when a single directory is passed, and otherwise disabled.
glob
Expand path given via globbing rules.
schema
Specify the datatypes of the columns. The datatypes must match the datatypes in the file(s). If there are extra columns that are not in the file(s), consider also enabling allow_missing_columns.
hive_schema
The column names and data types of the columns by which the data is partitioned, e.g. list(a = pl$String, b = pl$Float32). If NULL (default), the schema of the Hive partitions is inferred.
try_parse_hive_dates
Whether to try parsing hive values as date/datetime types.
rechunk
Reallocate to contiguous memory when all chunks/files are parsed.
low_memory
Reduce memory pressure at the expense of performance.
cache
Cache the result after reading.
storage_options
Named vector containing options that indicate how to connect to a cloud provider. The cloud providers currently supported are AWS, GCP, and Azure. If storage_options is not provided, Polars will try to infer the information from environment variables. A sketch of passing credentials follows this argument list.
retries
Number of retries if accessing a cloud instance fails.
include_file_paths
Include the path of the source file(s) as a column with this name.
allow_missing_columns
When reading a list of parquet files, if a column existing in the first file cannot be found in subsequent files, the default behavior is to raise an error. However, if allow_missing_columns is set to TRUE, a full-NULL column is returned instead of erroring for the files that do not contain the column.
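A minimal sketch of scanning from cloud storage with storage_options (the bucket, credential values, and key names below are assumptions to check against your provider's supported options):

pl$scan_parquet(
  "s3://my-bucket/dataset/*.parquet", # placeholder bucket and path
  storage_options = c(
    aws_access_key_id = "<key-id>", # placeholder credentials
    aws_secret_access_key = "<secret>",
    aws_region = "us-east-1"
  )
)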
Value
A polars LazyFrame
Examples
library("polars")
# Write a Parquet file that we can then import as a DataFrame
temp_file <- withr::local_tempfile(fileext = ".parquet")
as_polars_df(mtcars)$write_parquet(temp_file)
pl$scan_parquet(temp_file)$collect()
#> shape: (32, 11)
#> ┌──────┬─────┬───────┬───────┬───┬─────┬─────┬──────┬──────┐
#> │ mpg ┆ cyl ┆ disp ┆ hp ┆ … ┆ vs ┆ am ┆ gear ┆ carb │
#> │ --- ┆ --- ┆ --- ┆ --- ┆ ┆ --- ┆ --- ┆ --- ┆ --- │
#> │ f64 ┆ f64 ┆ f64 ┆ f64 ┆ ┆ f64 ┆ f64 ┆ f64 ┆ f64 │
#> ╞══════╪═════╪═══════╪═══════╪═══╪═════╪═════╪══════╪══════╡
#> │ 21.0 ┆ 6.0 ┆ 160.0 ┆ 110.0 ┆ … ┆ 0.0 ┆ 1.0 ┆ 4.0 ┆ 4.0 │
#> │ 21.0 ┆ 6.0 ┆ 160.0 ┆ 110.0 ┆ … ┆ 0.0 ┆ 1.0 ┆ 4.0 ┆ 4.0 │
#> │ 22.8 ┆ 4.0 ┆ 108.0 ┆ 93.0 ┆ … ┆ 1.0 ┆ 1.0 ┆ 4.0 ┆ 1.0 │
#> │ 21.4 ┆ 6.0 ┆ 258.0 ┆ 110.0 ┆ … ┆ 1.0 ┆ 0.0 ┆ 3.0 ┆ 1.0 │
#> │ 18.7 ┆ 8.0 ┆ 360.0 ┆ 175.0 ┆ … ┆ 0.0 ┆ 0.0 ┆ 3.0 ┆ 2.0 │
#> │ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … │
#> │ 30.4 ┆ 4.0 ┆ 95.1 ┆ 113.0 ┆ … ┆ 1.0 ┆ 1.0 ┆ 5.0 ┆ 2.0 │
#> │ 15.8 ┆ 8.0 ┆ 351.0 ┆ 264.0 ┆ … ┆ 0.0 ┆ 1.0 ┆ 5.0 ┆ 4.0 │
#> │ 19.7 ┆ 6.0 ┆ 145.0 ┆ 175.0 ┆ … ┆ 0.0 ┆ 1.0 ┆ 5.0 ┆ 6.0 │
#> │ 15.0 ┆ 8.0 ┆ 301.0 ┆ 335.0 ┆ … ┆ 0.0 ┆ 1.0 ┆ 5.0 ┆ 8.0 │
#> │ 21.4 ┆ 4.0 ┆ 121.0 ┆ 109.0 ┆ … ┆ 1.0 ┆ 1.0 ┆ 4.0 ┆ 2.0 │
#> └──────┴─────┴───────┴───────┴───┴─────┴─────┴──────┴──────┘
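# As an additional sketch, n_rows and row_index_name limit and annotate the
# scan (output omitted here)
pl$scan_parquet(temp_file, n_rows = 5, row_index_name = "row_nr")$collect()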
# Write a hive-style partitioned parquet dataset
temp_dir <- withr::local_tempdir()
as_polars_df(mtcars)$write_parquet(temp_dir, partition_by = c("cyl", "gear"))
list.files(temp_dir, recursive = TRUE)
#> [1] "cyl=4.0/gear=3.0/0.parquet" "cyl=4.0/gear=4.0/0.parquet"
#> [3] "cyl=4.0/gear=5.0/0.parquet" "cyl=6.0/gear=3.0/0.parquet"
#> [5] "cyl=6.0/gear=4.0/0.parquet" "cyl=6.0/gear=5.0/0.parquet"
#> [7] "cyl=8.0/gear=3.0/0.parquet" "cyl=8.0/gear=5.0/0.parquet"
# If the path is a folder, Polars automatically tries to detect partitions
# and includes them in the output
pl$scan_parquet(temp_dir)$collect()
#> shape: (32, 11)
#> ┌──────┬─────┬───────┬───────┬───┬─────┬─────┬──────┬──────┐
#> │ mpg ┆ cyl ┆ disp ┆ hp ┆ … ┆ vs ┆ am ┆ gear ┆ carb │
#> │ --- ┆ --- ┆ --- ┆ --- ┆ ┆ --- ┆ --- ┆ --- ┆ --- │
#> │ f64 ┆ f64 ┆ f64 ┆ f64 ┆ ┆ f64 ┆ f64 ┆ f64 ┆ f64 │
#> ╞══════╪═════╪═══════╪═══════╪═══╪═════╪═════╪══════╪══════╡
#> │ 21.5 ┆ 4.0 ┆ 120.1 ┆ 97.0 ┆ … ┆ 1.0 ┆ 0.0 ┆ 3.0 ┆ 1.0 │
#> │ 22.8 ┆ 4.0 ┆ 108.0 ┆ 93.0 ┆ … ┆ 1.0 ┆ 1.0 ┆ 4.0 ┆ 1.0 │
#> │ 24.4 ┆ 4.0 ┆ 146.7 ┆ 62.0 ┆ … ┆ 1.0 ┆ 0.0 ┆ 4.0 ┆ 2.0 │
#> │ 22.8 ┆ 4.0 ┆ 140.8 ┆ 95.0 ┆ … ┆ 1.0 ┆ 0.0 ┆ 4.0 ┆ 2.0 │
#> │ 32.4 ┆ 4.0 ┆ 78.7 ┆ 66.0 ┆ … ┆ 1.0 ┆ 1.0 ┆ 4.0 ┆ 1.0 │
#> │ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … │
#> │ 15.2 ┆ 8.0 ┆ 304.0 ┆ 150.0 ┆ … ┆ 0.0 ┆ 0.0 ┆ 3.0 ┆ 2.0 │
#> │ 13.3 ┆ 8.0 ┆ 350.0 ┆ 245.0 ┆ … ┆ 0.0 ┆ 0.0 ┆ 3.0 ┆ 4.0 │
#> │ 19.2 ┆ 8.0 ┆ 400.0 ┆ 175.0 ┆ … ┆ 0.0 ┆ 0.0 ┆ 3.0 ┆ 2.0 │
#> │ 15.8 ┆ 8.0 ┆ 351.0 ┆ 264.0 ┆ … ┆ 0.0 ┆ 1.0 ┆ 5.0 ┆ 4.0 │
#> │ 15.0 ┆ 8.0 ┆ 301.0 ┆ 335.0 ┆ … ┆ 0.0 ┆ 1.0 ┆ 5.0 ┆ 8.0 │
#> └──────┴─────┴───────┴───────┴───┴─────┴─────┴──────┴──────┘
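# A further sketch on the partitioned dataset: pin the Hive schema instead of
# inferring it and include the source path as a column (output omitted; the
# partition column types used here are assumptions)
pl$scan_parquet(
  temp_dir,
  hive_schema = list(cyl = pl$Float64, gear = pl$Float64),
  include_file_paths = "file_path"
)$collect()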