ccp_alpha | Complexity parameter used for Minimal Cost-Complexity Pruning. The
subtree with the largest cost complexity that is smaller than
``ccp_alpha`` will be chosen. By default, no pruning is performed. See
:ref:`minimal_cost_complexity_pruning` for details.
.. versionadded:: 0.22 | default: 0.0 |
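As a sketch of how a value might be chosen (this assumes scikit-learn's ``DecisionTreeRegressor``; the dataset and the particular alpha picked are purely illustrative)::

    from sklearn.datasets import make_regression
    from sklearn.tree import DecisionTreeRegressor

    X, y = make_regression(n_samples=200, random_state=0)
    # Effective alphas at which the unpruned tree would lose a subtree.
    path = DecisionTreeRegressor(random_state=0).cost_complexity_pruning_path(X, y)
    # A larger ccp_alpha prunes more aggressively, yielding a smaller tree.
    pruned = DecisionTreeRegressor(ccp_alpha=path.ccp_alphas[-2], random_state=0)
    pruned.fit(X, y)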
criterion | The function to measure the quality of a split. ``"friedman_mse"`` uses mean squared error with Friedman's improvement score for potential splits | default: "friedman_mse" |
max_depth | The maximum depth of the tree. If None, then nodes are expanded until
all leaves are pure or until all leaves contain fewer than
``min_samples_split`` samples | default: null |
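A minimal sketch of the depth cap (scikit-learn's ``DecisionTreeRegressor`` is assumed; the data is synthetic)::

    from sklearn.datasets import make_regression
    from sklearn.tree import DecisionTreeRegressor

    X, y = make_regression(n_samples=100, random_state=0)
    # With max_depth=2 the tree stops after two levels of splits.
    shallow = DecisionTreeRegressor(max_depth=2).fit(X, y)
    print(shallow.get_depth())  # at most 2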
max_features | The number of features to consider when looking for the best split:
- If int, then consider `max_features` features at each split.
- If float, then `max_features` is a fraction and
  `int(max_features * n_features)` features are considered at each
  split.
- If "auto", then `max_features=n_features`.
- If "sqrt", then `max_features=sqrt(n_features)`.
- If "log2", then `max_features=log2(n_features)`.
- If None, then `max_features=n_features`.
Note: the search for a split does not stop until at least one
valid partition of the node samples is found, even if it requires
effectively inspecting more than ``max_features`` features | default: 1.0 |
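The fraction case above truncates rather than rounds; a worked instance (``n_features=10`` is assumed purely for illustration)::

    n_features = 10
    max_features = 0.35
    # int() truncates, so 0.35 * 10 = 3.5 means 3 features per split.
    print(int(max_features * n_features))  # 3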
max_leaf_nodes | Grow a tree with ``max_leaf_nodes`` in best-first fashion.
Best nodes are defined by their relative reduction in impurity.
If None, the number of leaf nodes is unlimited | default: null |
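A sketch of the cap on leaf count (scikit-learn's ``DecisionTreeRegressor`` is assumed)::

    from sklearn.datasets import make_regression
    from sklearn.tree import DecisionTreeRegressor

    X, y = make_regression(n_samples=200, random_state=0)
    # Best-first growth stops once 8 leaves exist, even if deeper splits remain.
    tree = DecisionTreeRegressor(max_leaf_nodes=8).fit(X, y)
    print(tree.get_n_leaves())  # <= 8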
min_impurity_decrease | A node will be split if this split induces a decrease of the impurity
greater than or equal to this value.
The weighted impurity decrease equation is the following::
    N_t / N * (impurity - N_t_R / N_t * right_impurity
                        - N_t_L / N_t * left_impurity)
where ``N`` is the total number of samples, ``N_t`` is the number of
samples at the current node, ``N_t_L`` is the number of samples in the
left child, and ``N_t_R`` is the number of samples in the right child
``N``, ``N_t``, ``N_t_R`` and ``N_t_L`` all refer to the weighted sum,
if ``sample_weight`` is passed.
.. versionadded:: 0.19 | default: 0.0 |
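A worked instance of the equation above (all numbers are illustrative)::

    N, N_t, N_t_L, N_t_R = 100, 40, 25, 15
    impurity, left_impurity, right_impurity = 0.5, 0.3, 0.2
    decrease = N_t / N * (impurity - N_t_R / N_t * right_impurity
                                   - N_t_L / N_t * left_impurity)
    # The split is made only if decrease >= min_impurity_decrease.
    print(decrease)  # 0.095, up to float rounding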
min_samples_leaf | The minimum number of samples required to be at a leaf node.
A split point at any depth will only be considered if it leaves at
least ``min_samples_leaf`` training samples in each of the left and
right branches. This may have the effect of smoothing the model,
especially in regression.
- If int, then consider `min_samples_leaf` as the minimum number.
- If float, then `min_samples_leaf` is a fraction and
  `ceil(min_samples_leaf * n_samples)` is the minimum
  number of samples for each node.
.. versionchanged:: 0.18
Added float values for fractions | default: 8 |
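A sketch of the fractional form (scikit-learn's ``DecisionTreeRegressor`` is assumed; the sizes are illustrative)::

    from sklearn.datasets import make_regression
    from sklearn.tree import DecisionTreeRegressor

    X, y = make_regression(n_samples=150, random_state=0)
    # 0.05 * 150 = 7.5, so ceil() requires at least 8 samples in every leaf.
    tree = DecisionTreeRegressor(min_samples_leaf=0.05).fit(X, y)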
min_samples_split | The minimum number of samples required to split an internal node:
- If int, then consider `min_samples_split` as the minimum number.
- If float, then `min_samples_split` is a fraction and
  `ceil(min_samples_split * n_samples)` is the minimum
  number of samples for each split.
.. versionchanged:: 0.18
Added float values for fractions | default: 3 |
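The same ceil() arithmetic applies here, but as a per-split rather than per-leaf threshold; for example (``n_samples=150`` is assumed)::

    import math

    n_samples = 150
    min_samples_split = 0.02
    # A node with fewer than ceil(0.02 * 150) = 3 samples is never split.
    print(math.ceil(min_samples_split * n_samples))  # 3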
min_weight_fraction_leaf | The minimum weighted fraction of the sum total of weights (of all
the input samples) required to be at a leaf node. Samples have
equal weight when ``sample_weight`` is not provided | default: 0.0 |
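A sketch with explicit weights (scikit-learn's ``DecisionTreeRegressor`` is assumed; the weights are illustrative)::

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.tree import DecisionTreeRegressor

    X, y = make_regression(n_samples=100, random_state=0)
    weights = np.random.default_rng(0).uniform(0.5, 1.5, size=100)
    # Every leaf must carry at least 10% of weights.sum().
    tree = DecisionTreeRegressor(min_weight_fraction_leaf=0.1)
    tree.fit(X, y, sample_weight=weights)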
random_state | Controls the randomness of the estimator. The features are always
randomly permuted at each split, even if ``splitter`` is set to
``"best"``. When ``max_features < n_features``, the algorithm will
select ``max_features`` at random at each split before finding the best
split among them. But the best found split may vary across different
runs, even if ``max_features=n_features``. That is the case if the
improvement of the criterion is identical for several splits and one
split has to be selected at random. To obtain deterministic behaviour
during fitting, ``random_state`` has to be fixed to an integer.
See :term:`Glossary <random_state>` for details | default: null |
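A sketch of the reproducibility point (scikit-learn's ``DecisionTreeRegressor`` is assumed)::

    from sklearn.datasets import make_regression
    from sklearn.tree import DecisionTreeRegressor

    X, y = make_regression(n_samples=100, random_state=0)
    # Two fits with the same fixed random_state resolve ties identically.
    a = DecisionTreeRegressor(random_state=42).fit(X, y)
    b = DecisionTreeRegressor(random_state=42).fit(X, y)
    print((a.predict(X) == b.predict(X)).all())  # True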
splitter | The strategy used to choose the split at each node. Supported strategies are "best" to choose the best split and "random" to choose the best random split | default: "best" |