polars.testing.parametric.dataframes(cols: Optional[Union[int, column, Sequence[column]]] = None, lazy: bool = False, *, min_cols: int | None = 0, max_cols: int | None = 8, size: int | None = None, min_size: int | None = 0, max_size: int | None = 10, chunked: bool | None = None, include_cols: Optional[Sequence[column]] = None, null_probability: float | dict[str, float] = 0.0, allow_infinities: bool = True, allowed_dtypes: Optional[Sequence[Union[Type[DataType], DataType]]] = None, excluded_dtypes: Optional[Sequence[Union[Type[DataType], DataType]]] = None) → SearchStrategy[DataFrame | LazyFrame]

Provides a strategy for producing a DataFrame or LazyFrame.

cols{int, columns}, optional

integer number of columns to create, or a sequence of column objects that describe the desired DataFrame column data.

lazybool, optional

produce a LazyFrame instead of a DataFrame.

min_colsint, optional

if not passing an exact column count via cols, sets the minimum number of columns (defaults to 0).

max_colsint, optional

if not passing an exact column count via cols, sets the maximum number of columns (defaults to MAX_COLS).

sizeint, optional

if set, will create a DataFrame of exactly this many rows (ignoring the min_size/max_size params).

min_sizeint, optional

if not passing an exact size, set the minimum number of rows in the DataFrame.

max_sizeint, optional

if not passing an exact size, set the maximum number of rows in the DataFrame.

chunkedbool, optional

ensure that DataFrames with more than one row have n_chunks > 1. if omitted, chunking is randomised at the level of individual Series.

include_cols[column], optional

a list of column objects to include in the generated DataFrame. note that explicitly provided columns are appended onto the list of existing columns (if any present).

null_probability{float, dict[str,float]}, optional

probability (between 0.0 and 1.0) that a generated value is None. this is applied independently of any None values generated by the underlying strategy, and can be set either globally or on a per-column basis (if given as a {col: pct} dict). if null_probability is defined for a column, it takes precedence over the global value.

allow_infinitiesbool, optional

if set to False, disallows generation of +/-inf values for floating-point dtypes.

allowed_dtypes{list,set}, optional

when automatically generating data, allow only these dtypes.

excluded_dtypes{list,set}, optional

when automatically generating data, exclude these dtypes.


In actual usage this is deployed as a unit test decorator, providing a strategy that generates DataFrames or LazyFrames with the given characteristics for the unit test. While developing a strategy/test, it can also be useful to call .example() directly on a given strategy to see concrete instances of the generated data.


Use column or columns to specify the schema of the DataFrames to generate. Note: in actual use the strategy is applied as a test decorator, not used standalone.

>>> import polars as pl
>>> from hypothesis import given
>>> from polars.testing.parametric import column, columns, dataframes
>>> # generate arbitrary DataFrames
>>> @given(df=dataframes())
... def test_repr(df: pl.DataFrame) -> None:
...     assert isinstance(repr(df), str)
>>> # generate LazyFrames with at least 1 column, random dtypes, and at most 5 rows:
>>> lf_strategy = dataframes(min_cols=1, lazy=True, max_size=5)
>>> lf_strategy.example()
>>> # generate DataFrames with known colnames, random dtypes (per test, not per-frame):
>>> df_strategy = dataframes(columns(["x", "y", "z"]))
>>> df_strategy.example()
>>> # generate frames with explicitly named/typed columns and a fixed size:
>>> df_strategy = dataframes(
...     [
...         column("x", dtype=pl.Int32),
...         column("y", dtype=pl.Float64),
...     ],
...     size=2,
... )
>>> df_strategy.example()  
shape: (2, 2)
┌───────────┬────────────┐
│ x         ┆ y          │
│ ---       ┆ ---        │
│ i32       ┆ f64        │
╞═══════════╪════════════╡
│ -15836    ┆ 1.1755e-38 │
│ 575050513 ┆ NaN        │
└───────────┴────────────┘