- source: str | Path,
- infer_schema_length: int | None = 100,
- batch_size: int | None = 1024,
- n_rows: int | None = None,
- low_memory: bool = False,
- rechunk: bool = True,
- row_count_name: str | None = None,
- row_count_offset: int = 0,
Lazily read from a newline-delimited JSON (NDJSON) file, or from multiple files via glob patterns.
This allows the query optimizer to push down predicates and projections to the scan level, thereby potentially reducing memory overhead.
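As a minimal sketch of this pushdown behavior, assuming the function documented here is Polars' `pl.scan_ndjson` and a hypothetical file `logs.ndjson` whose lines contain `status` and `url` fields:

```python
import polars as pl

# Hypothetical input file; each line is a JSON object with "status" and "url".
lf = pl.scan_ndjson("logs.ndjson")

# The predicate (status >= 500) and the projection (two columns) are pushed
# down to the scan, so non-matching rows and unused columns are never
# fully materialized in memory.
df = (
    lf.filter(pl.col("status") >= 500)
    .select(["status", "url"])
    .collect()
)
print(df)
```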
Parameters:
- source: Path to a file.
- infer_schema_length: Infer the schema from the first `infer_schema_length` rows.
- batch_size: Number of rows to read in each batch.
- n_rows: Stop reading from the JSON file after reading `n_rows` rows.
- low_memory: Reduce memory pressure at the expense of performance.
- rechunk: Reallocate to contiguous memory when all chunks/files are parsed.
- row_count_name: If not None, insert a row count column with the given name into the DataFrame.
- row_count_offset: Offset to start the row count column at (only used if `row_count_name` is set).
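A small sketch of the row count options (the file name `events.ndjson` is hypothetical):

```python
import polars as pl

# Adds a row count column named "row_nr" that starts at 1 rather than
# the default offset of 0.
lf = pl.scan_ndjson(
    "events.ndjson",
    row_count_name="row_nr",
    row_count_offset=1,
)
print(lf.collect())
```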