categorical_features | Indicates the categorical features | default: null |
class_weight | Weights associated with classes in the form `{class_label: weight}`.
If not given, all classes are supposed to have weight one.
The "balanced" mode uses the values of y to automatically adjust
weights inversely proportional to class frequencies in the input data
as `n_samples / (n_classes * np.bincount(y))`.
Note that these weights will be multiplied with sample_weight (passed
through the fit method) if `sample_weight` is specified.
.. versionadded:: 1.2 | default: null |
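The "balanced" formula above can be checked with a small NumPy sketch (the label vector here is made up):

```python
import numpy as np

# Illustration of the "balanced" formula: each class gets
# weight = n_samples / (n_classes * np.bincount(y)).
y = np.array([0, 0, 0, 1])
n_classes = len(np.unique(y))
weights = len(y) / (n_classes * np.bincount(y))
balanced = dict(zip(np.unique(y).tolist(), weights.tolist()))
```

The minority class (label 1) ends up with weight 2.0, three times that of the majority class, compensating for its lower frequency.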
early_stopping | If 'auto', early stopping is enabled if the sample size is larger than
10000. If True, early stopping is enabled; otherwise, early stopping is
disabled.
.. versionadded:: 0.23 | default: "auto" |
interaction_cst | Specify interaction constraints, i.e. the sets of features which can
interact with each other in child node splits. Each item specifies the set
of feature indices that are allowed to interact with each other. Accepts
"pairwise", "no_interactions", or a sequence of lists/tuples/sets of int | default: null |
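As a hedged sketch of the accepted values for interaction constraints (the feature indices here are arbitrary), either an explicit sequence of index sets or one of two string shorthands can be passed:

```python
# Hypothetical constraints for a 5-feature dataset: features may only
# interact with others in the same set during child node splits.
interaction_cst = [{0, 1}, {2, 3, 4}]

# String shorthands accepted instead of explicit sets:
pairwise = "pairwise"                # any pair of features may interact
no_interactions = "no_interactions"  # each feature splits on its own
```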
l2_regularization | The L2 regularization parameter. Use 0 for no regularization | default: 0.7126201879134266 |
learning_rate | The learning rate, also known as *shrinkage*. This is used as a
multiplicative factor for the leaf values. Use ``1`` for no
shrinkage | default: 0.10609883405983059 |
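A toy sketch (not scikit-learn's internals; the leaf values are made up) of how shrinkage scales each boosting iteration's contribution:

```python
# Each tree's raw leaf value is multiplied by learning_rate before
# being added to the running raw prediction.
learning_rate = 0.1
leaf_values = [2.0, 1.0, 0.5]  # hypothetical per-iteration outputs
raw_prediction = 0.0
for value in leaf_values:
    raw_prediction += learning_rate * value
```

With ``learning_rate=1`` the values would be added unshrunk; smaller values slow learning down, typically requiring more iterations.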
loss | The loss function to use in the boosting process | default: "log_loss" |
max_bins | The maximum number of bins to use for non-missing values. Before
training, each feature of the input array `X` is binned into
integer-valued bins, which allows for a much faster training stage.
Features with a small number of unique values may use less than
``max_bins`` bins. In addition to the ``max_bins`` bins, one more bin
is always reserved for missing values. Must be no larger than 255 | default: 116 |
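A rough equal-frequency binning sketch in NumPy, assuming quantile bin edges and one extra bin for missing values (the real binner subsamples and handles edge cases differently):

```python
import numpy as np

# Map a continuous feature to integer bins 0..max_bins-1, reserving
# one extra bin (index max_bins) for missing values.
max_bins = 4
x = np.array([0.1, 0.4, 0.35, 0.8, np.nan, 0.7])
edges = np.nanquantile(x, np.linspace(0, 1, max_bins + 1)[1:-1])
binned = np.searchsorted(edges, x, side="right")
binned = np.where(np.isnan(x), max_bins, binned)  # missing -> last bin
```

Because splits then compare bin indices instead of raw floats, candidate thresholds per feature drop to at most ``max_bins``, which is what makes training fast.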
max_depth | The maximum depth of each tree. The depth of a tree is the number of
edges to go from the root to the deepest leaf.
Depth isn't constrained by default | default: null |
max_iter | The maximum number of iterations of the boosting process, i.e. the
maximum number of trees for binary classification. For multiclass
classification, `n_classes` trees per iteration are built | default: 115 |
max_leaf_nodes | The maximum number of leaves for each tree. Must be strictly greater
than 1. If None, there is no maximum limit | default: 76 |
min_samples_leaf | The minimum number of samples per leaf. For small datasets with less
than a few hundred samples, it is recommended to lower this value
since only very shallow trees would be built | default: 26 |
monotonic_cst | The monotonic constraints to enforce on each feature are specified
using the following integer values:
- 1: monotonic increase
- 0: no constraint
- -1: monotonic decrease
If a dict with str keys, maps features to monotonic constraints by name.
If an array, the features are mapped to constraints by position. See
:ref:`monotonic_cst_features_names` for a usage example.
The constraints are only valid for binary classification and hold
over the probability of the positive class.
Read more in the :ref:`User Guide `
.. versionadded:: 0.23
.. versionchanged:: 1.2
Accept dict of constraints with feature names as keys
| default: null |
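As a hedged illustration of the two accepted forms (the feature names are invented), the same constraints can be given by name or by position:

```python
# 1 = monotonic increase, 0 = no constraint, -1 = monotonic decrease.
monotonic_cst_by_name = {"age": 1, "n_children": 0, "debt": -1}

# Equivalent array form: constraints are matched to features by
# column position instead of by name.
monotonic_cst_by_position = [1, 0, -1]
```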
n_iter_no_change | Used to determine when to "early stop". The fitting process is
stopped when none of the last ``n_iter_no_change`` scores are better
than the ``n_iter_no_change - 1``-th-to-last one, up to some
tolerance. Only used if early stopping is performed | default: 10 |
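A toy sketch of this stopping rule (assuming higher scores are better; the helper name and score values are made up):

```python
def should_stop(scores, n_iter_no_change=10, tol=1e-7):
    """Stop when none of the last n_iter_no_change scores improves on
    the reference score (the one just before that window) by more
    than tol."""
    if len(scores) < n_iter_no_change + 1:
        return False
    reference = scores[-(n_iter_no_change + 1)]
    recent = scores[-n_iter_no_change:]
    return all(score <= reference + tol for score in recent)

stalled = should_stop([0.5, 0.6, 0.6, 0.6, 0.6], n_iter_no_change=3, tol=0.0)
improving = should_stop([0.5, 0.6, 0.61, 0.62, 0.63], n_iter_no_change=3, tol=0.0)
```

A larger ``tol`` makes the inequality easier to satisfy, so fitting stops sooner, which matches the description of ``tol`` in this table.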
random_state | Pseudo-random number generator to control the subsampling in the
binning process, and the train/validation data split if early stopping
is enabled.
Pass an int for reproducible output across multiple function calls.
See :term:`Glossary ` | default: null |
scoring | Scoring parameter to use for early stopping. It can be a single
string (see :ref:`scoring_parameter`) or a callable (see
:ref:`scoring`). If None, the estimator's default scorer
is used. If ``scoring='loss'``, early stopping is checked
w.r.t the loss value. Only used if early stopping is performed | default: "loss" |
tol | The absolute tolerance to use when comparing scores. The higher the
tolerance, the more likely we are to early stop: higher tolerance
means that it will be harder for subsequent iterations to be
considered an improvement upon the reference score | default: 0.4127209821043671 |
validation_fraction | Proportion (or absolute size) of training data to set aside as
validation data for early stopping. If None, early stopping is done on
the training data. Only used if early stopping is performed | default: 0.1 |
verbose | The verbosity level. If not zero, print some information about the
fitting process | default: 0 |
warm_start | When set to ``True``, reuse the solution of the previous call to fit
and add more estimators to the ensemble. For results to be valid, the
estimator should be re-trained on the same data only.
See :term:`the Glossary ` | default: false |